[jira] [Created] (YARN-4691) Cache resource usage at FSLeafQueue level

2016-02-11 Thread Ming Ma (JIRA)
Ming Ma created YARN-4691:
-

 Summary: Cache resource usage at FSLeafQueue level
 Key: YARN-4691
 URL: https://issues.apache.org/jira/browse/YARN-4691
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ming Ma


As part of fair share assignment, the fair scheduler needs to sort queues to 
decide which queue is furthest away from its fair share. During the sorting, 
the comparator needs to get the Resource usage of each queue.

A parent queue aggregates the resource usage of its leaf queues, and a leaf 
queue aggregates the resource usage of all apps in the queue.

{noformat}
FSLeafQueue.java
  @Override
  public Resource getResourceUsage() {
    Resource usage = Resources.createResource(0);
    readLock.lock();
    try {
      for (FSAppAttempt app : runnableApps) {
        Resources.addTo(usage, app.getResourceUsage());
      }
      for (FSAppAttempt app : nonRunnableApps) {
        Resources.addTo(usage, app.getResourceUsage());
      }
    } finally {
      readLock.unlock();
    }
    return usage;
  }
{noformat}

Each time the fair scheduler tries to assign a container, it needs to sort all 
queues. Thus the number of Resources.addTo operations will be 
(number_of_queues) * lg(number_of_queues) * number_of_apps_per_queue, or 
number_of_apps_on_the_cluster * lg(number_of_queues).

One way to solve this is to cache the resource usage at the FSLeafQueue level: 
each time the fair scheduler updates an FSAppAttempt's resource usage, it also 
updates the FSLeafQueue's cached usage. This will greatly reduce the overall 
number of Resources.addTo operations, as sketched below.
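
A minimal sketch of the proposed caching (the update hooks below are 
hypothetical; the actual wiring would follow wherever FSAppAttempt's usage 
changes):

{code}
// FSLeafQueue: keep a running total instead of recomputing it per call.
private final Resource cachedUsage = Resources.createResource(0);

// Hypothetical hook: called when a container is allocated to an app in this queue.
void incrementUsage(Resource delta) {
  synchronized (cachedUsage) {
    Resources.addTo(cachedUsage, delta);
  }
}

// Hypothetical hook: called when a container in this queue is released.
void decrementUsage(Resource delta) {
  synchronized (cachedUsage) {
    Resources.subtractFrom(cachedUsage, delta);
  }
}

@Override
public Resource getResourceUsage() {
  // O(1) per call instead of O(number of apps in the queue).
  synchronized (cachedUsage) {
    return Resources.clone(cachedUsage);
  }
}
{code}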



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4690) Skip object allocation in FSAppAttempt#getResourceUsage when possible

2016-02-11 Thread Ming Ma (JIRA)
Ming Ma created YARN-4690:
-

 Summary: Skip object allocation in FSAppAttempt#getResourceUsage 
when possible
 Key: YARN-4690
 URL: https://issues.apache.org/jira/browse/YARN-4690
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Ming Ma


YARN-2768 addresses an important bottleneck. Here is another similar instance 
where object allocation in Resources#subtract will slow down the fair 
scheduler's event processing thread.

{noformat}
org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java)
org.apache.hadoop.yarn.util.Records.newRecord(Records.java)
org.apache.hadoop.yarn.util.resource.Resources.createResource(Resources.java)
org.apache.hadoop.yarn.util.resource.Resources.clone(Resources.java)
org.apache.hadoop.yarn.util.resource.Resources.subtract(Resources.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt.getResourceUsage(FSAppAttempt.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.getResourceUsage(FSLeafQueue.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy$FairShareComparator.compare(FairSharePolicy.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy$FairShareComparator.compare(FairSharePolicy.java)
java.util.TimSort.binarySort(TimSort.java)
java.util.TimSort.sort(TimSort.java)
java.util.TimSort.sort(TimSort.java)
java.util.Arrays.sort(Arrays.java)
java.util.Collections.sort(Collections.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSParentQueue.assignContainer(FSParentQueue.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java)
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java)
org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.handle(ResourceSchedulerWrapper.java)
org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.handle(ResourceSchedulerWrapper.java)
{noformat}

One way to fix it is to return {{getCurrentConsumption()}} when there is no 
preemption, which is the normal case. This means the {{getResourceUsage}} method 
will return a reference to {{FSAppAttempt}}'s internal resource object. But that 
should be OK, as {{getResourceUsage}} doesn't expect the caller to modify the 
object.
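
A minimal sketch of that fast path, assuming the current implementation 
computes {{Resources.subtract(getCurrentConsumption(), getPreemptedResources())}} 
as the stack trace suggests:

{code}
@Override
public Resource getResourceUsage() {
  // Common case: nothing was preempted, so skip the clone/subtract
  // allocation and return the internal object directly. Callers must
  // treat the returned Resource as read-only.
  Resource preempted = getPreemptedResources();
  if (preempted.equals(Resources.none())) {
    return getCurrentConsumption();
  }
  return Resources.subtract(getCurrentConsumption(), preempted);
}
{code}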



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4684) TestYarnCLI#testGetContainers failing in CN locale

2016-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144176#comment-15144176
 ] 

Hudson commented on YARN-4684:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9291 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9291/])
YARN-4684. TestYarnCLI#testGetContainers failing in CN locale. (vvasudev: rev 
2fb423e195fb1d525304c40ff93966525efb640d)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
* hadoop-yarn-project/CHANGES.txt


> TestYarnCLI#testGetContainers failing in CN locale
> --
>
> Key: YARN-4684
> URL: https://issues.apache.org/jira/browse/YARN-4684
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4684.patch, 0002-YARN-4684.patch
>
>
> TestYarnCLI#testGetContainers failing in CN locale 
> {noformat}
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03?? ??? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - 
> OutputFrom command
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> {noformat}
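
The usual fix for such locale-sensitive test expectations is to pin the 
formatter to a fixed locale (a sketch only; the committed patch may differ):

{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class LocaleSafeFormat {
  // Render timestamps the same way regardless of the JVM's default locale,
  // so a CN-locale run produces the same string as an EN-locale run.
  public static String format(long timestamp) {
    SimpleDateFormat fmt =
        new SimpleDateFormat("EEE MMM dd HH:mm:ss Z yyyy", Locale.ENGLISH);
    return fmt.format(new Date(timestamp));
  }
}
{code}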



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2768) Avoid cloning Resource in FSAppAttempt#updateDemand

2016-02-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144100#comment-15144100
 ] 

Sangjin Lee commented on YARN-2768:
---

This looks like a great candidate to backport to 2.7.x and 2.6.x.

> Avoid cloning Resource in FSAppAttempt#updateDemand
> ---
>
> Key: YARN-2768
> URL: https://issues.apache.org/jira/browse/YARN-2768
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Hong Zhiguo
>Assignee: Hong Zhiguo
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: YARN-2768.patch, profiling_FairScheduler_update.png
>
>
> See the attached picture of the profiling result. The clone of the Resource 
> object within Resources.multiply() takes up *85%* (19.2 / 22.6) of the CPU 
> time of FairScheduler.update().
> The code of FSAppAttempt.updateDemand:
> {code}
> public void updateDemand() {
>   demand = Resources.createResource(0);
>   // Demand is current consumption plus outstanding requests
>   Resources.addTo(demand, app.getCurrentConsumption());
>   // Add up outstanding resource requests
>   synchronized (app) {
>     for (Priority p : app.getPriorities()) {
>       for (ResourceRequest r : app.getResourceRequests(p).values()) {
>         Resource total = Resources.multiply(r.getCapability(),
>             r.getNumContainers());
>         Resources.addTo(demand, total);
>       }
>     }
>   }
> }
> {code}
> The code of Resources.multiply:
> {code}
> public static Resource multiply(Resource lhs, double by) {
>   return multiplyTo(clone(lhs), by);
> }
> {code}
> The clone could be skipped by directly updating the value of this.demand.
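
For illustration, a clone-free variant of the inner loop, sketched against the 
2.x {{Resource}} getter/setter API (not necessarily the committed patch):

{code}
// Accumulate directly into demand; no intermediate Resource is allocated.
Resource capability = r.getCapability();
int containers = r.getNumContainers();
demand.setMemory(demand.getMemory() + capability.getMemory() * containers);
demand.setVirtualCores(
    demand.getVirtualCores() + capability.getVirtualCores() * containers);
{code}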



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2046) Out of band heartbeats are sent only on container kill and possibly too early

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144093#comment-15144093
 ] 

Hadoop QA commented on YARN-2046:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 1 new + 174 unchanged - 0 fixed = 175 total (was 174) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 15s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 48s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787615/YARN-2046-5.patch |
| JIRA Issue | YARN-2046 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux af852b9ccdde 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patc

[jira] [Updated] (YARN-2046) Out of band heartbeats are sent only on container kill and possibly too early

2016-02-11 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated YARN-2046:
--
Attachment: YARN-2046-5.patch

Thanks [~jlowe]! Here is the updated patch with your suggestion.

> Out of band heartbeats are sent only on container kill and possibly too early
> -
>
> Key: YARN-2046
> URL: https://issues.apache.org/jira/browse/YARN-2046
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 0.23.10, 2.4.0
>Reporter: Jason Lowe
>Assignee: Ming Ma
>  Labels: BB2015-05-RFC
> Attachments: YARN-2046-2.patch, YARN-2046-3.patch, YARN-2046-4.patch, 
> YARN-2046-5.patch, YARN-2046.patch
>
>
> [~mingma] pointed out in the review discussion for MAPREDUCE-5465 that the NM 
> currently sends out-of-band heartbeats only when stopContainer is called. In 
> addition, those heartbeats might be sent too early, because the container kill 
> event is posted asynchronously and the heartbeat monitor is notified right 
> after, possibly before the kill has been processed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4676) Automatic and Asynchronous Decommissioning Nodes Status Tracking

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144045#comment-15144045
 ] 

Hadoop QA commented on YARN-4676:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 15s 
{color} | {color:red} root: patch generated 134 new + 787 unchanged - 2 fixed = 
921 total (was 789) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 18 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 19s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 1s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 49s 
{color} | {color:green} hadoop-yarn-serv

[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144003#comment-15144003
 ] 

Hadoop QA commented on YARN-4205:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-4205 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12764043/YARN-4205_03.patch |
| JIRA Issue | YARN-4205 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10561/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: nijel
> Attachments: YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is counted from the submit time.
> The thread monitoring interval is configurable.
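
A minimal sketch of such a monitor ({{killApplication}} below is a hypothetical 
hook; the real service would plug into the RM's service framework):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.yarn.api.records.ApplicationId;

class LifetimeMonitor implements Runnable {
  private final Map<ApplicationId, Long> deadlines = new ConcurrentHashMap<>();
  private final long intervalMs; // the thread monitoring interval, configurable

  LifetimeMonitor(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  // Registered at submit time: the lifetime is counted from submission.
  void register(ApplicationId appId, long submitTimeMs, long lifetimeMs) {
    deadlines.put(appId, submitTimeMs + lifetimeMs);
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      long now = System.currentTimeMillis();
      for (Map.Entry<ApplicationId, Long> e : deadlines.entrySet()) {
        if (now > e.getValue()) {
          killApplication(e.getKey());
          deadlines.remove(e.getKey());
        }
      }
      try {
        Thread.sleep(intervalMs);
      } catch (InterruptedException ie) {
        return;
      }
    }
  }

  private void killApplication(ApplicationId appId) {
    // Hypothetical: in the RM this would go through the app KILL event.
  }
}
{code}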



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4205) Add a service for monitoring application life time out

2016-02-11 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-4205:

Target Version/s: 2.9.0

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: nijel
> Attachments: YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is counted from the submit time.
> The thread monitoring interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-02-11 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143996#comment-15143996
 ] 

Rohith Sharma K S commented on YARN-4205:
-

Hi [~nijel], the patch needs a rebase to apply cleanly. Would you please rebase 
it?

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: nijel
> Attachments: YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is counted from the submit time.
> The thread monitoring interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3813) Support Application timeout feature in YARN.

2016-02-11 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-3813:

Target Version/s: 2.9.0

> Support Application timeout feature in YARN. 
> -
>
> Key: YARN-3813
> URL: https://issues.apache.org/jira/browse/YARN-3813
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: scheduler
>Reporter: nijel
>Assignee: nijel
> Attachments: 0001-YARN-3813.patch, 0002_YARN-3813.patch, YARN 
> Application Timeout .pdf
>
>
> It will be useful to support an application timeout in YARN. Some use cases do 
> not care about the output of an application if it does not complete within a 
> specific time. 
> *Background:*
> The requirement is to show the CDR statistics of the last few minutes, say 
> every 5 minutes. The same job runs continuously with different datasets, so a 
> new job is started every 5 minutes. The estimated time for this task is 2 
> minutes or less. 
> If the application does not complete in the given time, its output is not 
> useful.
> *Proposal*
> The idea is to support an application timeout, where a timeout parameter is 
> given when the job is submitted. 
> Here, the user expects the application to be finished (completed or killed) 
> within the given time.
> One option is to move this logic to the application client (which submits the 
> job), but it would be nicer to make this generic, more robust logic in YARN.
> Kindly provide your suggestions/opinions on this feature. If it sounds good, I 
> will update the design doc and prototype patch.
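
A sketch of what the proposed submission-time parameter could look like 
({{setApplicationTimeout}} is hypothetical, not an existing API):

{code}
// Hypothetical API sketch: attach a timeout when submitting the application.
ApplicationSubmissionContext ctx =
    Records.newRecord(ApplicationSubmissionContext.class);
ctx.setApplicationTimeout(2 * 60 * 1000L); // finish (complete or kill) within 2 min
{code}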



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2266) Add an application timeout service in RM to kill applications which are not getting resources

2016-02-11 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143985#comment-15143985
 ] 

Rohith Sharma K S commented on YARN-2266:
-

Apologies for not noticing this JIRA before creating YARN-3813. Both JIRAs 
target the same use case. There has been some progress in YARN-3813, along with 
a POC patch, so we can continue the discussion there.

> Add an application timeout service in RM to kill applications which are not 
> getting resources
> -
>
> Key: YARN-2266
> URL: https://issues.apache.org/jira/browse/YARN-2266
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Ashutosh Jindal
>
> Currently, if an application is submitted to the RM, it keeps waiting until 
> resources are allocated for its AM. Such an application may be stuck until a 
> resource is allocated for the AM, possibly due to over-utilization of queue or 
> user limits, etc. In a production cluster, some periodically running 
> applications may have a smaller cluster share. So if resources are still 
> unavailable after waiting for some time, such applications can be marked as 
> failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4689) FairScheduler: Cleanup preemptContainer to be more readable

2016-02-11 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-4689:
--

 Summary: FairScheduler: Cleanup preemptContainer to be more 
readable
 Key: YARN-4689
 URL: https://issues.apache.org/jira/browse/YARN-4689
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Reporter: Karthik Kambatla
Priority: Trivial


In FairScheduler#preemptContainer, we check if a queue is preemptable. The code 
there would be cleaner if we used plain old if-else instead of continue, as 
sketched below.
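
Illustratively, the shape of the cleanup ({{isPreemptable}} and {{preemptFrom}} 
stand in for whatever the actual code uses):

{code}
// Before: skip non-preemptable queues with continue.
for (FSLeafQueue queue : queues) {
  if (!queue.isPreemptable()) {
    continue;
  }
  preemptFrom(queue);
}

// After: a plain old if reads more directly.
for (FSLeafQueue queue : queues) {
  if (queue.isPreemptable()) {
    preemptFrom(queue);
  }
}
{code}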



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4062) Add the flush and compaction functionality via coprocessors and scanners for flow run table

2016-02-11 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143909#comment-15143909
 ] 

Joep Rottinghuis commented on YARN-4062:


I've been slow to make progress on the review, mainly due to other work taking 
away my attention.
I think that in general the patch will work as written.
While going through the design again from the top down, I noticed (and discussed 
with [~vrushalic]) the following things:
- SUM is an aggregation operation that sums the latest value of each app in a 
flow(run) (or the latest value of each aggregation dimension in the higher 
level aggregation).
- The current MIN and MAX are actually different things. They are global mins 
and global maxes, in the sense that they keep only the lowest (or highest) value 
ever seen from any app in the flow(run). While this is a totally valid thing to 
do, there is also something like a MIN and MAX value for each app in a flow. 
What we currently call MIN and MAX should probably be called GLOBAL_MIN and 
GLOBAL_MAX (or something similar). We can then also have a MIN and MAX that work 
similarly, keeping the latest value for each app (aggregation dimension in 
general) and computing the MIN and MAX at read time. The flush compaction then 
works the same for MIN, MAX, and SUM, while for GLOBAL_MIN and GLOBAL_MAX we can 
keep the current code behavior of shedding values as we go.

GLOBAL_MIN and GLOBAL_MAX are appropriate for the existing use case of min start 
time, and also for gauges. The new MIN and MAX would be appropriate to answer 
questions such as: which app has the smallest number of mappers in this flow, 
or rather, what is that number?

With this realization also came the awareness that SUM_FINAL is really a 
different thing than SUM, MIN, MAX, GLOBAL_MIN, and GLOBAL_MAX, despite what I 
had earlier thought (and suggested). The former, "this is the final value", is 
something that has to happen at write time; it has to come from the writer 
itself as an argument. Ideally, the latter set of aggregation dimensions (SUM, 
MIN, MAX, etc.) is really set at a per-column level and shouldn't be passed 
from the client, but be instrumented by the ColumnHelper infrastructure 
instead. We should probably use a different tag value for that.
Both the aggregation dimension and this "FINAL_VALUE" (or whatever abbreviation 
we use) are needed to determine the right thing to do for compaction. Only one 
value needs to have this final-value bit/tag set.

I'll continue to try to document all of these things so that it is a bit easier 
to see visually what is going on.
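
To make the distinction concrete, a small sketch (the names and the Cell type 
are stand-ins, not timeline service code):

{code}
import java.util.*;

class AggregationSketch {
  static class Cell { String appId; long value; } // stand-in for an HBase cell

  // GLOBAL_MIN: fold every value ever written into one running minimum,
  // shedding older values as we go (what MIN does today).
  static long globalMin(List<Cell> cells) {
    long min = Long.MAX_VALUE;
    for (Cell c : cells) {
      min = Math.min(min, c.value);
    }
    return min;
  }

  // Proposed MIN: keep only the latest value per app (aggregation dimension),
  // then compute the minimum at read time.
  static long readTimeMin(List<Cell> cellsNewestFirst) {
    Map<String, Long> latestPerApp = new HashMap<>();
    for (Cell c : cellsNewestFirst) {
      latestPerApp.putIfAbsent(c.appId, c.value);
    }
    return Collections.min(latestPerApp.values());
  }
}
{code}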

> Add the flush and compaction functionality via coprocessors and scanners for 
> flow run table
> ---
>
> Key: YARN-4062
> URL: https://issues.apache.org/jira/browse/YARN-4062
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4062-YARN-2928.1.patch, 
> YARN-4062-feature-YARN-2928.01.patch, YARN-4062-feature-YARN-2928.02.patch
>
>
> As part of YARN-3901, coprocessor and scanner is being added for storing into 
> the flow_run table. It also needs a flush & compaction processing in the 
> coprocessor and perhaps a new scanner to deal with the data during flushing 
> and compaction stages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4687) Document Reservation ACLs

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143905#comment-15143905
 ] 

Hadoop QA commented on YARN-4687:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 19s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787593/YARN-4687.v1.patch |
| JIRA Issue | YARN-4687 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 378789ef5e3b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fdef0b |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Max memory used | 29MB |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10560/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document Reservation ACLs
> -
>
> Key: YARN-4687
> URL: https://issues.apache.org/jira/browse/YARN-4687
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Sean Po
>Priority: Minor
> Attachments: YARN-4687.v1.patch
>
>
> YARN-2575 introduces ACLs for ReservationSystem. This JIRA is for adding 
> documentation on how to configure the ACLs for Capacity/Fair schedulers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2830) Add backwards compatible ContainerId.newInstance constructor for use within Tez Local Mode

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-2830:
---
Release Note:   (was: I just committed this. Thanks [~jeagles] for the 
patch and [~ozawa] for the reviews!)

> Add backwards compatible ContainerId.newInstance constructor for use within 
> Tez Local Mode
> --
>
> Key: YARN-2830
> URL: https://issues.apache.org/jira/browse/YARN-2830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: YARN-2830-v1.patch, YARN-2830-v2.patch, 
> YARN-2830-v3.patch, YARN-2830-v4.patch
>
>
> YARN-2229 modified the private unstable API for constructing ContainerIds. Tez 
> uses this API (it shouldn't, but does) for Tez Local Mode. This causes a 
> NoSuchMethodError when using Tez compiled against pre-2.6. Instead, I propose 
> we add the backwards-compatible API, since overflow is not a problem in Tez 
> Local Mode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4687) Document Reservation ACLs

2016-02-11 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-4687:
--
Attachment: YARN-4687.v1.patch

This patch YARN-4687.v1.patch documents the Reservation ACLs for the 
CapacityScheduler and the FairScheduler.

> Document Reservation ACLs
> -
>
> Key: YARN-4687
> URL: https://issues.apache.org/jira/browse/YARN-4687
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Sean Po
>Priority: Minor
> Attachments: YARN-4687.v1.patch
>
>
> YARN-2575 introduces ACLs for ReservationSystem. This JIRA is for adding 
> documentation on how to configure the ACLs for Capacity/Fair schedulers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2336) Fair scheduler REST api returns a missing '[' bracket JSON for deep queue tree

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-2336:
---
Release Note:   (was: This incompatible change should be fixed on branch-2 
because the API is broken in branch-2. )

> Fair scheduler REST api returns a missing '[' bracket JSON for deep queue tree
> --
>
> Key: YARN-2336
> URL: https://issues.apache.org/jira/browse/YARN-2336
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.4.1, 2.6.0
>Reporter: Kenji Kikushima
>Assignee: Akira AJISAKA
>  Labels: BB2015-05-RFC
> Fix For: 2.8.0
>
> Attachments: YARN-2336-2.patch, YARN-2336-3.patch, YARN-2336-4.patch, 
> YARN-2336.005.patch, YARN-2336.007.patch, YARN-2336.008.patch, 
> YARN-2336.009.patch, YARN-2336.009.patch, YARN-2336.patch
>
>
> When we have sub-queues in the Fair Scheduler, the REST API returns JSON with 
> a missing '[' bracket for childQueues.
> This issue was found by [~ajisakaa] in YARN-1050.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3587) Fix the javadoc of DelegationTokenSecretManager in projects of yarn, etc.

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-3587:
---
Release Note:   (was: Update DelegationTokenSecretManager Javadoc 
(milliseconds))

> Fix the javadoc of DelegationTokenSecretManager in projects of yarn, etc.
> -
>
> Key: YARN-3587
> URL: https://issues.apache.org/jira/browse/YARN-3587
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Gabor Liptak
>Priority: Minor
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: YARN-3587.1.patch, YARN-3587.patch
>
>
> In RMDelegationTokenSecretManager and TimelineDelegationTokenSecretManager,  
> the javadoc of the constructor is as follows:
> {code}
>   /**
>* Create a secret manager
>* @param delegationKeyUpdateInterval the number of seconds for rolling new
>*secret keys.
>* @param delegationTokenMaxLifetime the maximum lifetime of the delegation
>*tokens
>* @param delegationTokenRenewInterval how often the tokens must be renewed
>* @param delegationTokenRemoverScanInterval how often the tokens are 
> scanned
>*for expired tokens
>*/
> {code}
> 1. "the number of seconds" should be "the number of milliseconds".
> 2. It's better to add time unit to the description of other parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4683) Document the List Reservations REST API

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143854#comment-15143854
 ] 

Hadoop QA commented on YARN-4683:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 12s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787584/YARN-4683.v1.patch |
| JIRA Issue | YARN-4683 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 98b150af07aa 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fdef0b |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Max memory used | 30MB |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10559/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Document the List Reservations REST API
> ---
>
> Key: YARN-4683
> URL: https://issues.apache.org/jira/browse/YARN-4683
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-4683.v1.patch
>
>
> YARN-4420 adds a REST API to list existing reservations in the Plan. This 
> JIRA proposes adding documentation for the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2575) Create separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-2575:
-
Fix Version/s: 2.8.0

> Create separate ACLs for Reservation create/update/delete/list ops
> --
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Fix For: 2.8.0
>
> Attachments: YARN-2575-branch-2.8.v11.patch, YARN-2575.v1.patch, 
> YARN-2575.v10.patch, YARN-2575.v11.patch, YARN-2575.v2.1.patch, 
> YARN-2575.v2.patch, YARN-2575.v3.patch, YARN-2575.v4.patch, 
> YARN-2575.v5.patch, YARN-2575.v6.patch, YARN-2575.v7.patch, 
> YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4577) Enable aux services to have their own custom classpath/jar file

2016-02-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143841#comment-15143841
 ] 

Sangjin Lee commented on YARN-4577:
---

My apologies [~xgong]! I must have looked at the last attachments. I'll take a 
look at it, and get back to you soon.

> Enable aux services to have their own custom classpath/jar file
> ---
>
> Key: YARN-4577
> URL: https://issues.apache.org/jira/browse/YARN-4577
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4577.1.patch, YARN-4577.2.patch, 
> YARN-4577.20160119.1.patch, YARN-4577.20160204.patch, YARN-4577.3.patch, 
> YARN-4577.3.rebase.patch, YARN-4577.4.patch
>
>
> Right now, users have to add their jars to the NM classpath directly, thus 
> putting them on the system classloader. But if multiple versions of the plugin 
> are present on the classpath, there is no control over which version actually 
> gets loaded. And if there are any conflicts between the dependencies 
> introduced by the auxiliary service and the NM itself, they can break the NM, 
> the auxiliary service, or both.
> The solution could be to instantiate aux services using a classloader that is 
> different from the system classloader.
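
A minimal sketch of the idea (the jar path and class name here are 
hypothetical, and a child-first classloader would be needed for full isolation, 
since URLClassLoader delegates parent-first):

{code}
import java.net.URL;
import java.net.URLClassLoader;
import org.apache.hadoop.yarn.server.api.AuxiliaryService;

public class AuxServiceLoader {
  // Instantiate an aux service through its own classloader instead of the
  // system classloader, so its jar does not sit on the NM classpath.
  public static AuxiliaryService load(String jarPath, String className)
      throws Exception {
    URLClassLoader auxLoader = new URLClassLoader(
        new URL[] { new URL("file://" + jarPath) },
        AuxiliaryService.class.getClassLoader());
    Class<?> clazz = auxLoader.loadClass(className);
    return (AuxiliaryService) clazz.newInstance();
  }
}
{code}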



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4683) Document the List Reservations REST API

2016-02-11 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-4683:
--
Attachment: YARN-4683.v1.patch

YARN-4683.v1.patch documents the REST API for list reservations from YARN-4420.

> Document the List Reservations REST API
> ---
>
> Key: YARN-4683
> URL: https://issues.apache.org/jira/browse/YARN-4683
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-4683.v1.patch
>
>
> YARN-4420 adds a REST API to list existing reservations in the Plan. This 
> JIRA proposes adding documentation for the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2575) Create separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143803#comment-15143803
 ] 

Sean Po commented on YARN-2575:
---

Thanks [~asuresh] and [~subru] for reviewing and [~asuresh] for also committing 
this patch.

> Create separate ACLs for Reservation create/update/delete/list ops
> --
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Attachments: YARN-2575-branch-2.8.v11.patch, YARN-2575.v1.patch, 
> YARN-2575.v10.patch, YARN-2575.v11.patch, YARN-2575.v2.1.patch, 
> YARN-2575.v2.patch, YARN-2575.v3.patch, YARN-2575.v4.patch, 
> YARN-2575.v5.patch, YARN-2575.v6.patch, YARN-2575.v7.patch, 
> YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2575) Create separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143719#comment-15143719
 ] 

Arun Suresh commented on YARN-2575:
---

The {{TestFifoScheduler}} failure is unrelated. Committing patch to branch-2.8 
shortly.
Thanks [~seanpo03]

> Create separate ACLs for Reservation create/update/delete/list ops
> --
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Attachments: YARN-2575-branch-2.8.v11.patch, YARN-2575.v1.patch, 
> YARN-2575.v10.patch, YARN-2575.v11.patch, YARN-2575.v2.1.patch, 
> YARN-2575.v2.patch, YARN-2575.v3.patch, YARN-2575.v4.patch, 
> YARN-2575.v5.patch, YARN-2575.v6.patch, YARN-2575.v7.patch, 
> YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2575) Create separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143704#comment-15143704
 ] 

Hadoop QA commented on YARN-2575:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
29s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 29s 
{color} | {color:green} branch-2.8 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 26s 
{color} | {color:green} branch-2.8 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s 
{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
58s {color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} branch-2.8 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 9s 
{color} | {color:green} branch-2.8 passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 4 new + 
374 unchanged - 4 fixed = 378 total (was 378) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 51s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color

[jira] [Updated] (YARN-1904) Uniform the XXXXNotFound messages from ClientRMService and ApplicationHistoryClientService

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-1904:
---
Release Note:   (was: I just committed this. Thanks Zhijie!)

> Uniform the XXXXNotFound messages from ClientRMService and 
> ApplicationHistoryClientService
> --
>
> Key: YARN-1904
> URL: https://issues.apache.org/jira/browse/YARN-1904
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
> Fix For: 2.7.0
>
> Attachments: YARN-1904.1.patch
>
>
> It's good to make ClientRMService and ApplicationHistoryClientService throw 
> NotFoundException with similar messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2230) Fix description of yarn.scheduler.maximum-allocation-vcores in yarn-default.xml (or code)

2016-02-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-2230:
---
Release Note:   (was: I have modified the description of the 
yarn.scheduler.maximum-allocation-vcores setting in yarn-default.xml to be 
reflective of the actual behavior (throw InvalidRequestException when the limit 
is crossed).

Since this is a documentation change, I have not added any test cases.

Please review the patch, thanks!)

> Fix description of yarn.scheduler.maximum-allocation-vcores in 
> yarn-default.xml (or code)
> -
>
> Key: YARN-2230
> URL: https://issues.apache.org/jira/browse/YARN-2230
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, documentation, scheduler
>Affects Versions: 2.4.0
>Reporter: Adam Kawa
>Assignee: Vijay Bhat
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: YARN-2230.001.patch, YARN-2230.002.patch
>
>
> When a user requests more vcores than the allocation limit (e.g. 
> mapreduce.map.cpu.vcores  is larger than 
> yarn.scheduler.maximum-allocation-vcores), then 
> InvalidResourceRequestException is thrown - 
> https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerUtils.java
> {code}
> if (resReq.getCapability().getVirtualCores() < 0 ||
>     resReq.getCapability().getVirtualCores() >
>         maximumResource.getVirtualCores()) {
>   throw new InvalidResourceRequestException("Invalid resource request"
>       + ", requested virtual cores < 0"
>       + ", or requested virtual cores > max configured"
>       + ", requestedVirtualCores="
>       + resReq.getCapability().getVirtualCores()
>       + ", maxVirtualCores=" + maximumResource.getVirtualCores());
> }
> {code}
> According to documentation - yarn-default.xml 
> http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml,
>  the request should be capped to the allocation limit.
> {code}
> <property>
>   <description>The maximum allocation for every container request at the RM,
>   in terms of virtual CPU cores. Requests higher than this won't take effect,
>   and will get capped to this value.</description>
>   <name>yarn.scheduler.maximum-allocation-vcores</name>
>   <value>32</value>
> </property>
> {code}
> This means that:
> * Either the documentation or the code should be corrected (unless this 
> exception is handled elsewhere accordingly, but it looks like it is not).
> This behavior is confusing, because when such a job (with 
> mapreduce.map.cpu.vcores larger than 
> yarn.scheduler.maximum-allocation-vcores) is submitted, it does not make any 
> progress. The warnings/exceptions are thrown on the scheduler (RM) side, e.g.
> {code}
> 2014-06-29 00:34:51,469 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: 
> Invalid resource ask by application appattempt_1403993411503_0002_01
> org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid 
> resource request, requested virtual cores < 0, or requested virtual cores > 
> max configured, requestedVirtualCores=32, maxVirtualCores=3
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:237)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateResourceRequests(RMServerUtils.java:80)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:420)
> .
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:416)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
> {code}
> * IMHO, such an exception should be forwarded to the client. Otherwise, it is 
> not obvious why a job does not make any progress.
> The same appears to apply to memory.
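> For illustration only, capping as the documentation describes (instead of 
> throwing) might look like the sketch below; the capping branch is 
> hypothetical, not a committed fix:
> {code}
> int requested = resReq.getCapability().getVirtualCores();
> int max = maximumResource.getVirtualCores();
> if (requested < 0) {
>   throw new InvalidResourceRequestException("Invalid resource request"
>       + ", requested virtual cores < 0"
>       + ", requestedVirtualCores=" + requested);
> }
> if (requested > max) {
>   // Silently cap to the configured maximum, as yarn-default.xml states.
>   resReq.getCapability().setVirtualCores(max);
> }
> {code}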



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4648) Move preemption related tests from TestFairScheduler to TestFairSchedulerPreemption

2016-02-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143676#comment-15143676
 ] 

Tsuyoshi Ozawa commented on YARN-4648:
--

[~lewuathe], sure, I'll check this over the weekend.

> Move preemption related tests from TestFairScheduler to 
> TestFairSchedulerPreemption
> ---
>
> Key: YARN-4648
> URL: https://issues.apache.org/jira/browse/YARN-4648
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Kai Sasaki
>  Labels: newbie++
> Attachments: YARN-4648.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4676) Automatic and Asynchronous Decommissioning Nodes Status Tracking

2016-02-11 Thread Daniel Zhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Zhi updated YARN-4676:
-
Attachment: HADOOP-4676.004.patch

> Automatic and Asynchronous Decommissioning Nodes Status Tracking
> 
>
> Key: YARN-4676
> URL: https://issues.apache.org/jira/browse/YARN-4676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Zhi
>Assignee: Daniel Zhi
>  Labels: features
> Fix For: 2.8.0
>
> Attachments: GracefulDecommissionYarnNode.pdf, HADOOP-4676.003.patch, 
> HADOOP-4676.004.patch
>
>
> DecommissioningNodeWatcher inside ResourceTrackingService tracks the status 
> of DECOMMISSIONING nodes automatically and asynchronously after the 
> client/admin makes a graceful decommission request. It tracks the status of 
> each DECOMMISSIONING node to decide when, after all running containers on 
> the node have completed, it will be transitioned into the DECOMMISSIONED 
> state. NodesListManager detects and handles include and exclude list changes 
> to kick off decommission or recommission as necessary.
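> A condensed sketch of the tracking idea (hypothetical names, not the 
> attached patch):
> {code}
> // Poll DECOMMISSIONING nodes; once nothing is running on a node,
> // transition it to DECOMMISSIONED.
> for (RMNode node : decommissioningNodes) {
>   if (node.getState() == NodeState.DECOMMISSIONING
>       && node.getRunningApps().isEmpty()) {
>     rmContext.getDispatcher().getEventHandler().handle(
>         new RMNodeEvent(node.getNodeID(), RMNodeEventType.DECOMMISSION));
>   }
> }
> {code}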



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4688) Allow YARN system metrics publisher to use ATS v1.5 APIs

2016-02-11 Thread Li Lu (JIRA)
Li Lu created YARN-4688:
---

 Summary: Allow YARN system metrics publisher to use ATS v1.5 APIs
 Key: YARN-4688
 URL: https://issues.apache.org/jira/browse/YARN-4688
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu


We may want to consider using ATS v1.5 APIs for the system metrics publisher. 
There are some contributions from the ATS v2 branch that refactor the YARN SMP 
to allow it to work with multiple versions. We may also need to consider 
merging in this change. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1515) Provide ContainerManagementProtocol#signalContainer processing a batch of signals

2016-02-11 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143523#comment-15143523
 ] 

Eric Payne commented on YARN-1515:
--

Hi [~jira.shegalov]. I would like to see this functionality implemented. We 
occasionally see containers time out, and it would be good if users could have 
direct feedback in the form of a jstack to help them debug their applications.

I have been coming up to speed on the work that's already been committed in 
this area under YARN-445 and its children. IIUC, YARN-445 and its children put 
in place the infrastructure for a {{Client -> RM -> NM -> Container}} signal 
path. On the other hand, this JIRA (along with YARN-1515) implements an {{AM -> 
NM -> Container}} signal path and the ability to send multiple signals per call.

It seems that these pieces could possibly be split into separate JIRAs. Either 
way, I think that a lot of what has been done in this JIRA could be used to add 
the interface to {{ContainerManagementProtocol}} that would allow the AM to 
prompt the NM to signal the container to dump its stack prior to killing the 
container on a timeout.

Is there a possibility that this JIRA will move forward? Ideally, we would like 
it all ported back to 2.7.
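As a strawman (record and method names here are hypothetical, loosely modeled 
on the YARN-445 work; the attached patches may differ), the AM-side call could 
look like:

{code}
// One call, several signals: jstack the container, then shut it down.
SignalContainerRequest request = SignalContainerRequest.newInstance(
    containerId,
    Arrays.asList(SignalContainerCommand.OUTPUT_THREAD_DUMP,
        SignalContainerCommand.FORCEFUL_SHUTDOWN));
containerManager.signalContainer(request);
{code}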

> Provide ContainerManagementProtocol#signalContainer processing a batch of 
> signals 
> --
>
> Key: YARN-1515
> URL: https://issues.apache.org/jira/browse/YARN-1515
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, nodemanager
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: YARN-1515.v01.patch, YARN-1515.v02.patch, 
> YARN-1515.v03.patch, YARN-1515.v04.patch, YARN-1515.v05.patch, 
> YARN-1515.v06.patch, YARN-1515.v07.patch, YARN-1515.v08.patch
>
>
> This is needed to implement MAPREDUCE-5044 to enable thread diagnostics for 
> timed-out task attempts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143450#comment-15143450
 ] 

Hadoop QA commented on YARN-4624:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 54s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 21s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 159m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12787067/YARN-2674-002.pa

[jira] [Updated] (YARN-2575) Create separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-2575:
--
Attachment: YARN-2575-branch-2.8.v11.patch

[~asuresh], thanks for reviewing. YARN-2575-branch-2.8.v11.patch is the patch 
that applies to branch-2.8.

> Create separate ACLs for Reservation create/update/delete/list ops
> --
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Attachments: YARN-2575-branch-2.8.v11.patch, YARN-2575.v1.patch, 
> YARN-2575.v10.patch, YARN-2575.v11.patch, YARN-2575.v2.1.patch, 
> YARN-2575.v2.patch, YARN-2575.v3.patch, YARN-2575.v4.patch, 
> YARN-2575.v5.patch, YARN-2575.v6.patch, YARN-2575.v7.patch, 
> YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.
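> For illustration, per-operation reservation ACLs on a CapacityScheduler 
> queue might be configured as below (property names are illustrative, not 
> confirmed against the committed patch):
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.default.acl_submit_reservations</name>
>   <value>user1,user2 group1</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.default.acl_administer_reservations</name>
>   <value>admin</value>
> </property>
> {code}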



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)

2016-02-11 Thread Ishai Menache (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143392#comment-15143392
 ] 

Ishai Menache commented on YARN-4525:
-

Added the patch that fixes this issue. [~curino], please review.

> Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
> ---
>
> Key: YARN-4525
> URL: https://issues.apache.org/jira/browse/YARN-4525
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ishai Menache
>Assignee: Ishai Menache
> Attachments: YARN-4525.patch
>
>
> One of our tests detected a corner case in getRangeOverlapping: When the 
> RLESparseResourceAllocation object is a result of a merge operation, the 
> underlying map is a "view" within some range. If  'end' is outside that 
> range, headMap(..) throws an uncaught exception.
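> The underlying JDK behavior can be reproduced in isolation (standalone 
> illustration, not YARN code):
> {code}
> NavigableMap<Long, Integer> m = new TreeMap<>();
> m.put(0L, 1); m.put(10L, 2); m.put(20L, 3);
> // A "view" restricted to [0, 10], as produced by a merge operation:
> NavigableMap<Long, Integer> view = m.subMap(0L, true, 10L, true);
> view.headMap(15L); // throws IllegalArgumentException: toKey out of range
> {code}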



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4525) Bug in RLESparseResourceAllocation.getRangeOverlapping(...)

2016-02-11 Thread Ishai Menache (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishai Menache updated YARN-4525:

Attachment: YARN-4525.patch

> Bug in RLESparseResourceAllocation.getRangeOverlapping(...)
> ---
>
> Key: YARN-4525
> URL: https://issues.apache.org/jira/browse/YARN-4525
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Ishai Menache
>Assignee: Ishai Menache
> Attachments: YARN-4525.patch
>
>
> One of our tests detected a corner case in getRangeOverlapping: When the 
> RLESparseResourceAllocation object is a result of a merge operation, the 
> underlying map is a "view" within some range. If  'end' is outside that 
> range, headMap(..) throws an uncaught exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2575) Create separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143296#comment-15143296
 ] 

Hudson commented on YARN-2575:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9285 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9285/])
YARN-2575. Create separate ACLs for Reservation (arun suresh: rev 
23f937e3b718f607d4fc975610ab3a03265f0f7e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ReservationACL.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationSchedulerConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/ReservationACLsTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/ACLsTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/ReservationsACLsManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/PlanView.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationSystem.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/CapacityOverTimePolicy.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/InMemoryPlan.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/QueueACLsTestBase.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/AbstractReservationSystem.java


> Create separate ACLs for Reservation create/update/delete/list ops
> --
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Attachments: YARN-2575.v1.patch, YARN-2575.v10.patch, 
> YARN-2575.v11.patch, YARN-2575.v2.1.patch, YARN-2575.v2.patch, 
> YARN-2575.v3.patch, YARN-2575.v4.patch, YARN-2575.v5.patch, 
> YARN-2575.v6.patch, YARN-2575.v7.patch, YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2575) Create separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143294#comment-15143294
 ] 

Arun Suresh commented on YARN-2575:
---

Committed to trunk and branch-2, but the patch did not apply cleanly onto 
branch-2.8. [~seanpo03] will be uploading one that works; I will commit that 
shortly after.

> Create separate ACLs for Reservation create/update/delete/list ops
> --
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Attachments: YARN-2575.v1.patch, YARN-2575.v10.patch, 
> YARN-2575.v11.patch, YARN-2575.v2.1.patch, YARN-2575.v2.patch, 
> YARN-2575.v3.patch, YARN-2575.v4.patch, YARN-2575.v5.patch, 
> YARN-2575.v6.patch, YARN-2575.v7.patch, YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2575) Create separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2575:
--
Summary: Create separate ACLs for Reservation create/update/delete/list ops 
 (was: Consider creating separate ACLs for Reservation 
create/update/delete/list ops)

> Create separate ACLs for Reservation create/update/delete/list ops
> --
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Attachments: YARN-2575.v1.patch, YARN-2575.v10.patch, 
> YARN-2575.v11.patch, YARN-2575.v2.1.patch, YARN-2575.v2.patch, 
> YARN-2575.v3.patch, YARN-2575.v4.patch, YARN-2575.v5.patch, 
> YARN-2575.v6.patch, YARN-2575.v7.patch, YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2575) Consider creating separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143236#comment-15143236
 ] 

Arun Suresh commented on YARN-2575:
---

+1
Thanks for all the work on this [~seanpo03] and for the reviews [~subru].
Will commit this shortly

> Consider creating separate ACLs for Reservation create/update/delete/list ops
> -
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Attachments: YARN-2575.v1.patch, YARN-2575.v10.patch, 
> YARN-2575.v11.patch, YARN-2575.v2.1.patch, YARN-2575.v2.patch, 
> YARN-2575.v3.patch, YARN-2575.v4.patch, YARN-2575.v5.patch, 
> YARN-2575.v6.patch, YARN-2575.v7.patch, YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4577) Enable aux services to have their own custom classpath/jar file

2016-02-11 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143207#comment-15143207
 ] 

Xuan Gong commented on YARN-4577:
-

[~sjlee0] Thanks for the review. It looks like we reviewed an incorrect patch.

https://issues.apache.org/jira/secure/attachment/12786365/YARN-4577.20160204.patch
 is the correct one.

Sorry for the inconsistent naming of the patch. Could you review the patch?

> Enable aux services to have their own custom classpath/jar file
> ---
>
> Key: YARN-4577
> URL: https://issues.apache.org/jira/browse/YARN-4577
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4577.1.patch, YARN-4577.2.patch, 
> YARN-4577.20160119.1.patch, YARN-4577.20160204.patch, YARN-4577.3.patch, 
> YARN-4577.3.rebase.patch, YARN-4577.4.patch
>
>
> Right now, users have to add their jars to the NM classpath directly, thus 
> putting them on the system classloader. But if multiple versions of the 
> plugin are present on the classpath, there is no control over which version 
> actually gets loaded. Or if there are any conflicts between the dependencies 
> introduced by the auxiliary service and the NM itself, they can break the 
> NM, the auxiliary service, or both.
> The solution could be to instantiate aux services using a classloader that 
> is different from the system classloader.
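> A minimal sketch of the isolation idea (class and path names are 
> hypothetical):
> {code}
> // Load the aux service through its own classloader so its dependencies
> // cannot collide with the NM's.
> URL[] serviceJars = { new URL("file:///path/to/aux-service.jar") };
> ClassLoader auxLoader =
>     new URLClassLoader(serviceJars, AuxiliaryService.class.getClassLoader());
> Class<?> clazz = Class.forName("com.example.MyAuxService", true, auxLoader);
> AuxiliaryService service =
>     (AuxiliaryService) clazz.getDeclaredConstructor().newInstance();
> {code}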



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-02-11 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143175#comment-15143175
 ] 

Devaraj K commented on YARN-4624:
-

Thanks [~brahmareddy] for the updated patch.

{code}
+  capacities.getMaxAMLimitPercentage() == 0
+ ? 0 : capacities.getMaxAMLimitPercentage())).
{code}

Don't we need to check for null instead of 0 here? Please verify the scenario 
with the patch changes.
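i.e., something along these lines (illustrative; it assumes the getter returns 
a boxed Float, which is what makes the NPE possible in the first place):

{code}
Float maxAMLimit = capacities.getMaxAMLimitPercentage();
// Guard against the value being absent rather than comparing against 0.
float rendered = (maxAMLimit == null) ? 0f : maxAMLimit;
{code}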

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: YARN-2674-002.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2266) Add an application timeout service in RM to kill applications which are not getting resources

2016-02-11 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143169#comment-15143169
 ] 

Devaraj K commented on YARN-2266:
-

Duplicate of YARN-3813

> Add an application timeout service in RM to kill applications which are not 
> getting resources
> -
>
> Key: YARN-2266
> URL: https://issues.apache.org/jira/browse/YARN-2266
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Ashutosh Jindal
>
> Currently, if an application is submitted to the RM, the app keeps waiting 
> until resources are allocated for the AM. Such an application may be stuck 
> until a resource is allocated for the AM, and this may be due to 
> over-utilization of queue or user limits, etc. In a production cluster, some 
> periodically running applications may have a smaller cluster share. So after 
> waiting for some time, if resources are not available, such applications can 
> be marked as failed.
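> A minimal sketch of the idea (names and threshold are hypothetical, not an 
> actual patch):
> {code}
> // Periodically fail apps that have waited too long for their AM container.
> for (RMApp app : rmContext.getRMApps().values()) {
>   RMAppAttempt attempt = app.getCurrentAppAttempt();
>   boolean amAllocated =
>       attempt != null && attempt.getMasterContainer() != null;
>   if (!amAllocated && now - app.getSubmitTime() > amAllocationTimeoutMs) {
>     // Mark the application as failed instead of letting it wait forever.
>   }
> }
> {code}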



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4686) MiniYARNCluster.start() returns before cluster is completely started

2016-02-11 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143158#comment-15143158
 ] 

Karthik Kambatla commented on YARN-4686:


It makes sense to ensure the mini-cluster start doesn't return until the 
cluster has actually fully started. I am comfortable with transitioning one of 
the RMs to active and updating the HA tests accordingly. 

I don't expect tests outside of Yarn/MR to depend on the HA nature of the 
cluster. 
Also, MiniYarnCluster is not marked Public-Stable yet. Should we just go ahead 
and mark the constructor that allows multiple RMs Private? 

> MiniYARNCluster.start() returns before cluster is completely started
> 
>
> Key: YARN-4686
> URL: https://issues.apache.org/jira/browse/YARN-4686
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Eric Badger
> Attachments: MAPREDUCE-6507.001.patch
>
>
> TestRMNMInfo fails intermittently. Below is trace for the failure
> {noformat}
> testRMNMInfo(org.apache.hadoop.mapreduce.v2.TestRMNMInfo)  Time elapsed: 0.28 
> sec  <<< FAILURE!
> java.lang.AssertionError: Unexpected number of live nodes: expected:<4> but 
> was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.mapreduce.v2.TestRMNMInfo.testRMNMInfo(TestRMNMInfo.java:111)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2575) Consider creating separate ACLs for Reservation create/update/delete/list ops

2016-02-11 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143145#comment-15143145
 ] 

Sean Po commented on YARN-2575:
---

I double-checked the 
hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA test failure 
by running the test locally with the patch applied on trunk, and the test 
passed. YARN-4312 documents this test failure in the past; its fix is not 
currently applied to trunk.

> Consider creating separate ACLs for Reservation create/update/delete/list ops
> -
>
> Key: YARN-2575
> URL: https://issues.apache.org/jira/browse/YARN-2575
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sean Po
> Attachments: YARN-2575.v1.patch, YARN-2575.v10.patch, 
> YARN-2575.v11.patch, YARN-2575.v2.1.patch, YARN-2575.v2.patch, 
> YARN-2575.v3.patch, YARN-2575.v4.patch, YARN-2575.v5.patch, 
> YARN-2575.v6.patch, YARN-2575.v7.patch, YARN-2575.v8.patch, YARN-2575.v9.patch
>
>
> YARN-1051 introduces the ReservationSystem and in the current implementation 
> anyone who can submit applications can also submit reservations. This JIRA is 
> to evaluate creating separate ACLs for Reservation create/update/delete ops.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-02-11 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143140#comment-15143140
 ] 

Brahma Reddy Battula commented on YARN-4624:


[~sunilg] and [~devaraj.k], kindly review the patch. Thanks.

> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: YARN-2674-002.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-4686) MiniYARNCluster.start() returns before cluster is completely started

2016-02-11 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-4686:


Assignee: Eric Badger

I'd really like to see the minicluster not start up by default with a race 
condition where it hasn't actually finished starting.  With multiple tests 
currently failing sporadically due to this, I'd like the start() method to not 
return until the cluster is started.  For non-HA setups this seems very 
straightforward.

However for the HA minicluster it appears the intent is to have the RMs all 
come up in standby.  The problem is that the NM start method _will not return_ 
until it has successfully registered with an RM.  Since all RMs are in standby 
the NM start never completes, the minicluster start never completes, and we 
never get to the part of the test where it activates an RM.  Therefore HA 
minicluster tests will always time out.

I like Eric's proposal to have the minicluster activate the first RM during the 
start method of an HA cluster so we can bring it up and return from the cluster 
start method with no pending start processing (and therefore no race conditions 
in the tests using the cluster).  However that could break some of the 
assumptions of those using the HA minicluster in their existing tests.  For 
Hadoop tests we can simply fix up the tests accordingly, if necessary (since 
most seem to activate the first one anyway), but I don't know if there are 
other tests that use an HA minicluster and will break if the first RM is 
already active by default.

[~kasha] do you have an opinion on this?
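Roughly the following, as a sketch (accessor names are from memory and may not 
match MiniYARNCluster's internals exactly):

{code}
// During an HA MiniYARNCluster.start(), make RM 0 active before starting
// the NMs so that NM registration, and therefore start(), can complete.
resourceManagers[0].getRMContext().getRMAdminService().transitionToActive(
    new HAServiceProtocol.StateChangeRequestInfo(
        HAServiceProtocol.RequestSource.REQUEST_BY_USER));
{code}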

> MiniYARNCluster.start() returns before cluster is completely started
> 
>
> Key: YARN-4686
> URL: https://issues.apache.org/jira/browse/YARN-4686
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Eric Badger
> Attachments: MAPREDUCE-6507.001.patch
>
>
> TestRMNMInfo fails intermittently. Below is trace for the failure
> {noformat}
> testRMNMInfo(org.apache.hadoop.mapreduce.v2.TestRMNMInfo)  Time elapsed: 0.28 
> sec  <<< FAILURE!
> java.lang.AssertionError: Unexpected number of live nodes: expected:<4> but 
> was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.mapreduce.v2.TestRMNMInfo.testRMNMInfo(TestRMNMInfo.java:111)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4682) AMRM client to log when AMRM token updated

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15143066#comment-15143066
 ] 

Hadoop QA commented on YARN-4682:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 28s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 43s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.8.0_72 Timed out junit tests | 
org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
| JDK v1.7.0_95 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.ap

[jira] [Commented] (YARN-4501) Document new put APIs in TimelineClient for ATS 1.5

2016-02-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142817#comment-15142817
 ] 

Steve Loughran commented on YARN-4501:
--

s/give/given/

> Document new put APIs in TimelineClient for ATS 1.5
> ---
>
> Key: YARN-4501
> URL: https://issues.apache.org/jira/browse/YARN-4501
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Junping Du
>Assignee: Xuan Gong
>
> In YARN-4234, we are adding new put APIs in TimelineClient, we should 
> document it properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4501) Document new put APIs in TimelineClient for ATS 1.5

2016-02-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142815#comment-15142815
 ] 

Steve Loughran commented on YARN-4501:
--

give the imminent release of Hadoop 2.8, are there any drafts of this available 
yet?

> Document new put APIs in TimelineClient for ATS 1.5
> ---
>
> Key: YARN-4501
> URL: https://issues.apache.org/jira/browse/YARN-4501
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Junping Du
>Assignee: Xuan Gong
>
> In YARN-4234, we are adding new put APIs in TimelineClient, we should 
> document it properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4684) TestYarnCLI#testGetContainers failing in CN locale

2016-02-11 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-4684:

Summary: TestYarnCLI#testGetContainers failing in CN locale  (was: 
TestYarnCLI#testGetContainers failing in trunk )

> TestYarnCLI#testGetContainers failing in CN locale
> --
>
> Key: YARN-4684
> URL: https://issues.apache.org/jira/browse/YARN-4684
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4684.patch, 0002-YARN-4684.patch
>
>
> TestYarnCLI#testGetContainers failing in CN locale 
> {noformat}
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03?? ??? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - 
> OutputFrom command
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> {noformat}
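> The difference between the two outputs is plain locale-dependent date 
> formatting; pinning the formatter's Locale both reproduces and avoids it 
> (illustrative, not the attached patch):
> {code}
> DateFormat cn = new SimpleDateFormat("EEE MMM dd HH:mm:ss Z yyyy", Locale.CHINA);
> DateFormat us = new SimpleDateFormat("EEE MMM dd HH:mm:ss Z yyyy", Locale.US);
> cn.format(new Date(1000L)); // localized day/month names, as in the log above
> us.format(new Date(1000L)); // "Thu Jan 01 ..." -- what the test expects
> {code}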



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4684) TestYarnCLI#testGetContainers failing in trunk

2016-02-11 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142731#comment-15142731
 ] 

Varun Vasudev commented on YARN-4684:
-

+1. I'll commit this tomorrow if no one objects.

> TestYarnCLI#testGetContainers failing in trunk 
> ---
>
> Key: YARN-4684
> URL: https://issues.apache.org/jira/browse/YARN-4684
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4684.patch, 0002-YARN-4684.patch
>
>
> TestYarnCLI#testGetContainers failing in CN locale 
> {noformat}
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03?? ??? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - 
> OutputFrom command
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4684) TestYarnCLI#testGetContainers failing in trunk

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142660#comment-15142660
 ] 

Hadoop QA commented on YARN-4684:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 32s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 45s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.8.0_72 Timed out junit tests | 
org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
| JDK v1.7.0_95 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache

[jira] [Commented] (YARN-2885) Create AMRMProxy request interceptor for distributed scheduling decisions for queueable containers

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142636#comment-15142636
 ] 

Hadoop QA commented on YARN-2885:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
18s {color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} yarn-2877 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} yarn-2877 passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 40s 
{color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
9s {color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
15s {color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} yarn-2877 passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 52s 
{color} | {color:green} yarn-2877 passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 148 new 
+ 379 unchanged - 1 fixed = 527 total (was 380) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
45s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed with 
JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 9m 24s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.7.0_91
 with JDK v1.7.0_91 generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_66. {color} |

[jira] [Commented] (YARN-2885) Create AMRMProxy request interceptor for distributed scheduling decisions for queueable containers

2016-02-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142632#comment-15142632
 ] 

Hadoop QA commented on YARN-2885:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 52s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
52s {color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s 
{color} | {color:green} yarn-2877 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s 
{color} | {color:green} yarn-2877 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s 
{color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
9s {color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
53s {color} | {color:green} yarn-2877 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s 
{color} | {color:green} yarn-2877 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 31s 
{color} | {color:green} yarn-2877 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 147 new 
+ 377 unchanged - 1 fixed = 524 total (was 378) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
45s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed with 
JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 8m 7s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdk1.7.0_95
 with JDK v1.7.0_95 generated 4 new + 1 unchanged - 0 fixed = 5 total (was 1) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_72. {color} |

[jira] [Updated] (YARN-4682) AMRM client to log when AMRM token updated

2016-02-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-4682:
-
Attachment: YARN-4682-002.patch

The issue is just that the lines of code above your patch had changed, so the 
diff wouldn't apply. This is the same patch rebased on top of those changes; if 
jenkins/yetus is happy, I'll commit it.

> AMRM client to log when AMRM token updated
> --
>
> Key: YARN-4682
> URL: https://issues.apache.org/jira/browse/YARN-4682
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
> Attachments: YARN-4682-002.patch, YARN-4682.patch, YARN-4682.patch.1
>
>   Original Estimate: 0.25h
>  Remaining Estimate: 0.25h
>
> There's no information right now as to when the AMRM token gets updated; if 
> something has gone wrong with the update, you can't tell when it last went 
> through.
> Fix: add a log statement.
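
A minimal sketch of the kind of log line proposed (class and method names here 
are illustrative, not the actual patch):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class AmrmTokenLogSketch {
  private static final Log LOG = LogFactory.getLog(AmrmTokenLogSketch.class);

  // Illustrative stand-in for the client path that installs a new AMRM token.
  void onAmrmTokenUpdated(String tokenService) {
    // One log line is enough to tell when the last update went through.
    LOG.info("Updated AMRM token for " + tokenService + " at "
        + System.currentTimeMillis());
  }
}
{code}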



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4684) TestYarnCLI#testGetContainers failing in trunk

2016-02-11 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142516#comment-15142516
 ] 

Bibin A Chundatt commented on YARN-4684:


[~vvasudev]

Thank you for the review. Attaching a patch after addressing the comments.
charset=utf-8 is set for both the stream and toString, since in the CN locale 
the dates are rendered in Chinese.
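
For context, a minimal sketch (not the actual patch) of capturing test output 
with an explicit charset, so the comparison no longer depends on the platform 
default encoding:

{code}
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class Utf8CaptureSketch {
  public static void main(String[] args) throws UnsupportedEncodingException {
    ByteArrayOutputStream sysOutStream = new ByteArrayOutputStream();
    // Build the PrintStream with charset=UTF-8 so multi-byte output (e.g.
    // Chinese dates in the CN locale) is encoded consistently.
    PrintStream sysOut = new PrintStream(sysOutStream, true, "UTF-8");
    sysOut.println("container_1234_0005_01_01  ...  COMPLETE");
    // Decode the captured bytes with the same charset.
    String actualOutput = sysOutStream.toString("UTF-8");
    System.out.println(actualOutput);
  }
}
{code}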

> TestYarnCLI#testGetContainers failing in trunk 
> ---
>
> Key: YARN-4684
> URL: https://issues.apache.org/jira/browse/YARN-4684
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4684.patch, 0002-YARN-4684.patch
>
>
> TestYarnCLI#testGetContainers failing in CN locale 
> {noformat}
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03?? ??? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - 
> OutputFrom command
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4684) TestYarnCLI#testGetContainers failing in trunk

2016-02-11 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4684:
---
Attachment: 0002-YARN-4684.patch

> TestYarnCLI#testGetContainers failing in trunk 
> ---
>
> Key: YARN-4684
> URL: https://issues.apache.org/jira/browse/YARN-4684
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4684.patch, 0002-YARN-4684.patch
>
>
> TestYarnCLI#testGetContainers failing in CN locale 
> {noformat}
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03?? ??? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - 
> OutputFrom command
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4684) TestYarnCLI#testGetContainers failing in trunk

2016-02-11 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142398#comment-15142398
 ] 

Varun Vasudev commented on YARN-4684:
-

Thanks for the patch [~bibinchundatt]. One minor fix: instead of calling 
{code}sysOutStream.toString("UTF-8"){code} in 
{code}Assert.assertEquals(appReportStr, sysOutStream.toString("UTF-8"));{code} 
can you use {code}actualOutput{code}, which you create earlier with 
{code}String actualOutput = sysOutStream.toString("UTF-8");{code}?
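
In other words, a sketch of the suggested change (decode once, reuse the 
result):

{code}
String actualOutput = sysOutStream.toString("UTF-8");
// ... existing assertions on actualOutput ...
// Reuse the already-decoded string instead of a second toString("UTF-8"):
Assert.assertEquals(appReportStr, actualOutput);
{code}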

> TestYarnCLI#testGetContainers failing in trunk 
> ---
>
> Key: YARN-4684
> URL: https://issues.apache.org/jira/browse/YARN-4684
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-4684.patch
>
>
> TestYarnCLI#testGetContainers failing in CN locale 
> {noformat}
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02?? ??? 01 08:00:01 +0800 1970   
> ?? ??? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03?? ??? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - 
> OutputFrom command
> 2016-02-10 12:32:24,309 INFO  mortbay.log (Slf4jLog.java:info(67)) - [Total 
> number of containers :3
>   Container-Id  Start Time Finish 
> Time   StateHost   Node Http Address  
>   LOG-URL
>  container_1234_0005_01_01鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_02鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
> 鏄熸湡鍥? 涓?鏈? 01 08:00:05 +0800 1970   COMPLETE   
> host:1234http://host:2345 logURL
>  container_1234_0005_01_03鏄熸湡鍥? 涓?鏈? 01 08:00:01 +0800 1970   
>  N/A RUNNING   host:1234
> http://host:2345   
> ]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2266) Add an application timeout service in RM to kill applications which are not getting resources

2016-02-11 Thread Sudip Hazra Choudhury (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15142399#comment-15142399
 ] 

Sudip Hazra Choudhury commented on YARN-2266:
-

We are certainly interested in this feature; it would be very helpful.

The timeout could default to 0 (infinite), and users should be able to set a 
non-zero value depending on their requirements.
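
A hypothetical sketch of that semantics (names invented for illustration; 
nothing here is committed code):

{code}
public class AppTimeoutSketch {
  // 0 means "infinite": never time the application out while it waits for
  // AM resources; any positive value is a deadline in milliseconds.
  static boolean shouldFailForAmTimeout(long submitTimeMs, long nowMs,
      long timeoutMs) {
    if (timeoutMs == 0) {
      return false; // default behaviour: wait indefinitely
    }
    return (nowMs - submitTimeMs) > timeoutMs;
  }

  public static void main(String[] args) {
    long submitted = System.currentTimeMillis() - 600_000L; // 10 minutes ago
    long now = System.currentTimeMillis();
    System.out.println(shouldFailForAmTimeout(submitted, now, 0));        // false
    System.out.println(shouldFailForAmTimeout(submitted, now, 300_000L)); // true
  }
}
{code}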

> Add an application timeout service in RM to kill applications which are not 
> getting resources
> -
>
> Key: YARN-2266
> URL: https://issues.apache.org/jira/browse/YARN-2266
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Ashutosh Jindal
>
> Currently, if an application is submitted to the RM, the app keeps waiting 
> until resources are allocated for the AM. Such an application may be stuck 
> until a resource is allocated for the AM, for example due to over-utilization 
> of queue or user limits. In a production cluster, some periodically running 
> applications may have a smaller cluster share. So if resources are still not 
> available after waiting for some time, such applications can be marked as 
> failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-2885) Create AMRMProxy request interceptor for distributed scheduling decisions for queueable containers

2016-02-11 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2885:
--
Attachment: YARN-2885-yarn-2877.v9.patch

Rebasing patch against the branch

> Create AMRMProxy request interceptor for distributed scheduling decisions for 
> queueable containers
> --
>
> Key: YARN-2885
> URL: https://issues.apache.org/jira/browse/YARN-2885
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
> Attachments: YARN-2885-yarn-2877.001.patch, 
> YARN-2885-yarn-2877.002.patch, YARN-2885-yarn-2877.full-2.patch, 
> YARN-2885-yarn-2877.full-3.patch, YARN-2885-yarn-2877.full.patch, 
> YARN-2885-yarn-2877.v4.patch, YARN-2885-yarn-2877.v5.patch, 
> YARN-2885-yarn-2877.v6.patch, YARN-2885-yarn-2877.v7.patch, 
> YARN-2885-yarn-2877.v8.patch, YARN-2885-yarn-2877.v9.patch, 
> YARN-2885_api_changes.patch
>
>
> We propose to add a Local ResourceManager (LocalRM) to the NM in order to 
> support distributed scheduling decisions. 
> Architecturally we leverage the RMProxy, introduced in YARN-2884. 
> The LocalRM makes distributed decisions for queueable container requests. 
> Guaranteed-start requests are still handled by the central RM.
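
As a rough illustration of that split (hypothetical names; the real 
interceptor is in the attached patches):

{code}
import java.util.ArrayList;
import java.util.List;

public class RequestSplitSketch {
  enum ExecutionType { QUEUEABLE, GUARANTEED }

  static class Ask {
    final ExecutionType type;
    Ask(ExecutionType type) { this.type = type; }
  }

  // Queueable asks are decided locally by the LocalRM on the NM;
  // guaranteed-start asks are forwarded to the central RM.
  static void split(List<Ask> asks, List<Ask> local, List<Ask> central) {
    for (Ask ask : asks) {
      if (ask.type == ExecutionType.QUEUEABLE) {
        local.add(ask);
      } else {
        central.add(ask);
      }
    }
  }

  public static void main(String[] args) {
    List<Ask> asks = new ArrayList<>();
    asks.add(new Ask(ExecutionType.QUEUEABLE));
    asks.add(new Ask(ExecutionType.GUARANTEED));
    List<Ask> local = new ArrayList<>();
    List<Ask> central = new ArrayList<>();
    split(asks, local, central);
    System.out.println(local.size() + " local, " + central.size() + " central");
  }
}
{code}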



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)