[jira] [Commented] (YARN-6184) Introduce loading icon in each page of new YARN UI

2017-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877606#comment-15877606
 ] 

Hudson commented on YARN-6184:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11285 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11285/])
YARN-6184. Introduce loading icon in each page of new YARN UI. (sunilg: rev 
f1c9cafefc1940211b9fa0b77d2997ddb589af4e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-nodes.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-queue-apps-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/router.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/assets/images/spinner.gif
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-apps/loading.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue/info.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue-apps.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue-apps.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-queue-apps-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue-apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/apps.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-queue/info-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-queue/apps-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue/apps.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/loading.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/cluster-overview.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/info.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queues.js


> Introduce loading icon in each page of new YARN UI
> --
>
> Key: YARN-6184
> URL: https://issues.apache.org/jira/browse/YARN-6184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6184.001.patch, YARN-6184.002.patch, 
> YARN-6184.003.patch, YARN-6184.004.patch, YARN-6184.005.patch, 
> YARN-6184.006.patch
>
>
> Add loading icon in each page in new YARN-UI. This will help in cases where 
> we download large data to client side.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3884) App History status not updated when RMContainer transitions from RESERVED to KILLED

2017-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877581#comment-15877581
 ] 

Sunil G commented on YARN-3884:
---

Thanks [~leftnoteasy] and [~rohithsharma]

I think it makes sense to me as well. At most, a user might want to know the 
staleReservedContainers count of an app. Hence, if we have an attempt/app metric 
for the staleReservedContainers count, we should be good from that angle. This 
would also remove the need to publish such containers to ATS, so Wangda's 
approach will help make this simpler. [~bibinchundatt] [~varun_saxena] 
Thoughts?
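
For illustration only: a minimal sketch of the kind of per-attempt counter being 
discussed, using hypothetical class and method names rather than the actual RM 
metrics code.

{code}
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical per-attempt metrics holder for reserved-container accounting. */
public class AttemptReservationMetrics {

  // Containers that were reserved but released/killed before ever being allocated.
  private final AtomicLong staleReservedContainers = new AtomicLong();

  /** Called when a RESERVED container is killed without reaching ALLOCATED. */
  public void incrStaleReservedContainers() {
    staleReservedContainers.incrementAndGet();
  }

  public long getStaleReservedContainers() {
    return staleReservedContainers.get();
  }
}
{code}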

> App History status not updated when RMContainer transitions from RESERVED to 
> KILLED
> ---
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, 
> YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, 
> YARN-3884.0006.patch, YARN-3884.0007.patch, YARN-3884.0008.patch
>
>
> Setup
> ===
> 1 NM 3072 16 cores each
> Steps to reproduce
> ===
> 1.Submit apps  to Queue 1 with 512 mb 1 core
> 2.Submit apps  to Queue 2 with 512 mb and 5 core
> lots of containers get reserved and unreserved in this case 
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: null, ] queue=default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.0846906, absoluteUsedCapacity=0.4166, numApps=1, 
> numContainers=5 clusterResource=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.default stats: default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO 
> 

[jira] [Created] (YARN-6214) NullPointer Exception while querying timeline server API

2017-02-21 Thread Ravi Teja Ch N V (JIRA)
Ravi Teja Ch N V created YARN-6214:
--

 Summary: NullPointer Exception while querying timeline server API
 Key: YARN-6214
 URL: https://issues.apache.org/jira/browse/YARN-6214
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Affects Versions: 2.7.1
Reporter: Ravi Teja Ch N V


The apps API works fine and gives all applications, including MapReduce and Tez:
http://:8188/ws/v1/applicationhistory/apps

But when queried with application types via these APIs, it fails with a 
NullPointerException.
http://:8188/ws/v1/applicationhistory/apps?applicationTypes=TEZ
http://:8188/ws/v1/applicationhistory/apps?applicationTypes=MAPREDUCE

NullPointerException: java.lang.NullPointerException

We are blocked on this issue, as we are not able to run analytics on the Tez job 
counters for the production jobs.
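
The stack trace below points at WebServices.getApps(WebServices.java:195); where 
exactly the null comes from cannot be told from this report alone, but a 
defensive filter of the following shape (purely hypothetical class and method 
names) is the kind of guard that avoids an NPE when an application report has no 
type set or the query parameter is absent:

{code}
import java.util.Collection;

public final class AppTypeFilter {

  private AppTypeFilter() {}

  /**
   * Returns true if the app's type matches one of the requested types.
   * Null/empty inputs are treated as "no filtering" or "no match" rather than
   * being dereferenced.
   */
  public static boolean matches(String appType, Collection<String> requestedTypes) {
    if (requestedTypes == null || requestedTypes.isEmpty()) {
      return true;   // no applicationTypes query parameter: keep everything
    }
    if (appType == null) {
      return false;  // an app without a type can never match an explicit filter
    }
    for (String requested : requestedTypes) {
      if (requested != null && requested.equalsIgnoreCase(appType)) {
        return true;
      }
    }
    return false;
  }
}
{code}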


Timeline Logs:
2017-02-22 11:47:57,183 WARN  webapp.GenericExceptionHandler 
(GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.server.webapp.WebServices.getApps(WebServices.java:195)
at 
org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices.getApps(AHSWebServices.java:96)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)


Complete stacktrace:
http://pastebin.com/bRgxVabf







[jira] [Commented] (YARN-6109) Add an ability to convert ChildQueue to ParentQueue

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877573#comment-15877573
 ] 

Hadoop QA commented on YARN-6109:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6109 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853876/YARN-6109.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 29f0175fbbc8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 003ae00 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15035/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15035/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15035/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add an ability to convert ChildQueue to ParentQueue
> ---
>
> Key: YARN-6109
>  

[jira] [Updated] (YARN-6184) Introduce loading icon in each page of new YARN UI

2017-02-21 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6184:
--
Description: Add loading icon in each page in new YARN-UI. This will help 
in cases where we download large data to client side.  (was: Add loading icon 
in each page in new YARN-UI.)

> Introduce loading icon in each page of new YARN UI
> --
>
> Key: YARN-6184
> URL: https://issues.apache.org/jira/browse/YARN-6184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6184.001.patch, YARN-6184.002.patch, 
> YARN-6184.003.patch, YARN-6184.004.patch, YARN-6184.005.patch, 
> YARN-6184.006.patch
>
>
> Add loading icon in each page in new YARN-UI. This will help in cases where 
> we download large data to client side.






[jira] [Updated] (YARN-6184) Introduce loading icon in each page of new YARN UI

2017-02-21 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-6184:
--
Summary: Introduce loading icon in each page of new YARN UI  (was: 
[YARN-3368] Introduce loading icon in each page of YARN-UI)

> Introduce loading icon in each page of new YARN UI
> --
>
> Key: YARN-6184
> URL: https://issues.apache.org/jira/browse/YARN-6184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6184.001.patch, YARN-6184.002.patch, 
> YARN-6184.003.patch, YARN-6184.004.patch, YARN-6184.005.patch, 
> YARN-6184.006.patch
>
>
> Add loading icon in each page in new YARN-UI.






[jira] [Comment Edited] (YARN-6211) Synchronization improvement in move and priority

2017-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877534#comment-15877534
 ] 

Sunil G edited comment on YARN-6211 at 2/22/17 6:01 AM:


A few more minor nits:
# There are unused imports in {{RMAppManager}}; please remove those as well.
# Adding to [~rohithsharma]'s comments: in moveApplicationAcrossQueue and 
updateApplicationPriority, the appId from the request itself could be passed. But 
you can add a null check for the app, to ensure that the app has not been removed 
by the time the call reaches either of these two methods.
# To avoid taking the appId reference from RMApp, you may not need to do it in 
RMAppManager. Instead, you can pass rmApp.getApplicationId() from the 
ClientRMService API itself.


was (Author: sunilg):
One more minor nit:
# There are unused imports in {{RMAppManager}}, please remove that also.
# Adding to [~rohithsharma] comments, in moveApplicationAcrossQueue and 
updateApplicationPriority, appId from request itself could be passed. But you 
can have null check for app, to ensure that app is not removed when call 
reaches at either of these two methods.

> Synchronization improvement in move and priority
> 
>
> Key: YARN-6211
> URL: https://issues.apache.org/jira/browse/YARN-6211
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6211.001.patch
>
>
> Application appid is wrongly taken for synchronization






[jira] [Commented] (YARN-6211) Synchronization improvement in move and priority

2017-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877534#comment-15877534
 ] 

Sunil G commented on YARN-6211:
---

A couple of minor nits:
# There are unused imports in {{RMAppManager}}; please remove those as well.
# Adding to [~rohithsharma]'s comments: in moveApplicationAcrossQueue and 
updateApplicationPriority, the appId from the request itself could be passed. But 
you can add a null check for the app, to ensure that the app has not been removed 
by the time the call reaches either of these two methods.

> Synchronization improvement in move and priority
> 
>
> Key: YARN-6211
> URL: https://issues.apache.org/jira/browse/YARN-6211
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6211.001.patch
>
>
> Application appid is wrongly taken for synchronization






[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877530#comment-15877530
 ] 

Rohith Sharma K S commented on YARN-6069:
-

[~gtCarrera9] Basically, CORS is required when different domains access a REST 
URI. As of today, collectors are started as an auxiliary service in the NM, and 
the only TimelineClient is the application master, which runs inside the Hadoop 
cluster. Since the cluster is installed under the same domain, TimelineCollectors 
do NOT require CORS support to publish entities.
OTOH, when we support offline collectors, CORS is mandatorily required. 
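
For context, "CORS support" essentially means the web app answers cross-origin 
requests with the Access-Control-* response headers that browsers check. Hadoop 
ships its own CrossOriginFilter for this; the snippet below is only an 
illustrative, minimal servlet filter sketch, not the actual Hadoop implementation 
or the configuration this patch adds.

{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

/** Minimal illustrative CORS filter: adds the headers a browser checks before
 *  allowing a page served from another domain to read the REST response. */
public class SimpleCorsFilter implements Filter {

  @Override
  public void init(FilterConfig conf) {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletResponse http = (HttpServletResponse) res;
    // In a real deployment the allowed origins should be configurable, not "*".
    http.setHeader("Access-Control-Allow-Origin", "*");
    http.setHeader("Access-Control-Allow-Methods", "GET, HEAD, OPTIONS");
    http.setHeader("Access-Control-Allow-Headers", "X-Requested-With, Content-Type, Accept");
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() {}
}
{code}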

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch, 
> YARN-6069-YARN-5355.0002.patch, YARN-6069-YARN-5355.0003.patch, 
> YARN-6069-YARN-5355.0004.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence, without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 should provide more info on the implementation.






[jira] [Commented] (YARN-6211) Synchronization improvement in move and priority

2017-02-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877505#comment-15877505
 ] 

Rohith Sharma K S commented on YARN-6211:
-

Thanks Bibin for the patch. A couple of comments:
# For both methods in RMAppManager, i.e. moveApplicationAcrossQueue and 
updateApplicationPriority, pass the RMApp reference that has already been 
validated by ClientRMService. Do not fetch it again from the context in 
RMAppManager; that could lead to an NPE in rare corner cases.
# Use the RMApp reference to get the applicationId inside the synchronized block. 
See the updateApplicationTimeout method for reference.
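
A minimal sketch of the pattern suggested above, with a hypothetical method shape 
rather than the real RMAppManager code: the caller passes the already-validated 
RMApp, and the applicationId used for synchronization is taken from that 
reference instead of being looked up again from the RM context.

{code}
import org.apache.hadoop.yarn.api.records.ApplicationId;

public class AppUpdateSketch {

  /** Stand-in for the RMApp interface, reduced to what the sketch needs. */
  interface RMApp {
    ApplicationId getApplicationId();
  }

  /**
   * The RMApp reference is validated (non-null, not yet removed) by the caller,
   * e.g. ClientRMService, so it is never re-fetched from the RM context here.
   */
  void updateApplicationPriority(RMApp app, int newPriority) {
    ApplicationId applicationId = app.getApplicationId();
    synchronized (applicationId) {
      // ... apply the priority update for applicationId ...
    }
  }
}
{code}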

> Synchronization improvement in move and priority
> 
>
> Key: YARN-6211
> URL: https://issues.apache.org/jira/browse/YARN-6211
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6211.001.patch
>
>
> Application appid is wrongly taken for synchronization






[jira] [Commented] (YARN-6109) Add an ability to convert ChildQueue to ParentQueue

2017-02-21 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877502#comment-15877502
 ] 

Xuan Gong commented on YARN-6109:
-

[~leftnoteasy], [~Naganarasimha]

Thanks for the review comments. Uploaded a new patch to address your comments.

> Add an ability to convert ChildQueue to ParentQueue
> ---
>
> Key: YARN-6109
> URL: https://issues.apache.org/jira/browse/YARN-6109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6109.1.patch, YARN-6109.2.patch, 
> YARN-6109.rebase.patch
>
>







[jira] [Updated] (YARN-6109) Add an ability to convert ChildQueue to ParentQueue

2017-02-21 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6109:

Attachment: YARN-6109.2.patch

> Add an ability to convert ChildQueue to ParentQueue
> ---
>
> Key: YARN-6109
> URL: https://issues.apache.org/jira/browse/YARN-6109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6109.1.patch, YARN-6109.2.patch, 
> YARN-6109.rebase.patch
>
>







[jira] [Created] (YARN-6213) Failure handling and retry for performFailover in RetryInvocationHandler

2017-02-21 Thread Botong Huang (JIRA)
Botong Huang created YARN-6213:
--

 Summary: Failure handling and retry for performFailover in 
RetryInvocationHandler 
 Key: YARN-6213
 URL: https://issues.apache.org/jira/browse/YARN-6213
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Botong Huang
Assignee: Botong Huang
Priority: Minor


In {{RetryInvocationHandler}}, when the method invocation fails, we rely on 
{{FailoverProxyProvider}} to performFailover and get a new proxy, so that we 
can retry the method invocation. 

However, performFailover and getting the new proxy might itself fail (throw an 
exception or return a null proxy). This is currently not handled properly; we end 
up throwing the exception out of the while loop. Instead, we should catch the 
exception (or check for a null proxy) and retry performFailover again, until the 
failover count reaches the maximum. 
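
A rough sketch of the retry shape being proposed, using hypothetical helper names 
(the real RetryInvocationHandler/FailoverProxyProvider APIs differ in detail):

{code}
/** Illustrative only: keep retrying failover itself until a proxy is obtained
 *  or the maximum number of failover attempts is reached. */
public final class FailoverRetrySketch {

  interface ProxyProvider<T> {
    void performFailover(T oldProxy);
    T getProxy();
  }

  static <T> T failoverWithRetry(ProxyProvider<T> provider, T oldProxy, int maxFailovers)
      throws Exception {
    Exception lastFailure = null;
    for (int attempt = 0; attempt < maxFailovers; attempt++) {
      try {
        provider.performFailover(oldProxy);
        T proxy = provider.getProxy();
        if (proxy != null) {
          return proxy;  // success: the caller can now retry the method invocation
        }
        lastFailure = new IllegalStateException("proxy provider returned a null proxy");
      } catch (Exception e) {
        lastFailure = e;   // failover itself failed; try again
      }
    }
    throw lastFailure != null ? lastFailure
        : new IllegalStateException("no failover attempts were made");
  }

  private FailoverRetrySketch() {}
}
{code}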






[jira] [Commented] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877489#comment-15877489
 ] 

Hadoop QA commented on YARN-6210:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 41m 
40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6210 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853866/YARN-6210.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aea153c517c1 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 003ae00 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15034/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15034/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FS: Node reservations can interfere with preemption
> 

[jira] [Commented] (YARN-6212) NodeManager metrics returning wrong negative values

2017-02-21 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877476#comment-15877476
 ] 

Miklos Szegedi commented on YARN-6212:
--

Thank you, [~abkshvn], for reporting this. It may be YARN-3933. Are you using 
the fair scheduler?

> NodeManager metrics returning wrong negative values
> ---
>
> Key: YARN-6212
> URL: https://issues.apache.org/jira/browse/YARN-6212
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.3
>Reporter: Abhishek Shivanna
>
> It looks like the metrics returned by the NodeManager have negative values 
> for metrics that should never be negative. Here is the output from the NM endpoint: 
> {noformat}
> /jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics
> {noformat}
> {noformat}
> {
>   "beans" : [ {
> "name" : "Hadoop:service=NodeManager,name=NodeManagerMetrics",
> "modelerType" : "NodeManagerMetrics",
> "tag.Context" : "yarn",
> "tag.Hostname" : "",
> "ContainersLaunched" : 707,
> "ContainersCompleted" : 9,
> "ContainersFailed" : 124,
> "ContainersKilled" : 579,
> "ContainersIniting" : 0,
> "ContainersRunning" : 19,
> "AllocatedGB" : -26,
> "AllocatedContainers" : -5,
> "AvailableGB" : 252,
> "AllocatedVCores" : -5,
> "AvailableVCores" : 101,
> "ContainerLaunchDurationNumOps" : 718,
> "ContainerLaunchDurationAvgTime" : 18.0
>   } ]
> }
> {noformat}
> Is there any circumstance under which the values for AllocatedGB, 
> AllocatedContainers, and AllocatedVCores can go below 0? 






[jira] [Commented] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877405#comment-15877405
 ] 

Karthik Kambatla commented on YARN-6210:


Thanks for the prompt review, Daniel. 

The updated patch incorporates all your suggestions except for the following:
# {{FSAppAttempt#fairShareStarvation()}}: I have reverted the meaning of 
_starved_. The reason for this is to ensure we don't mark an app as starved when 
its demand is fully met but its allocation is under its fair share. The code also 
seems simpler and less confusing this way.
# {{TestFairScheduler.testReservationWithMultiplePriorities()}}: The 
reservation-at-lower-priority assert is retained. I did drop the asserts for 
scheduler resources, but retained the checks around running containers. Since 
the test is for verifying node reservation behavior, the other asserts are 
misleading. 

> FS: Node reservations can interfere with preemption
> ---
>
> Key: YARN-6210
> URL: https://issues.apache.org/jira/browse/YARN-6210
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6210.1.patch, YARN-6210.2.patch, YARN-6210.3.patch
>
>
> Today, on a saturated cluster, apps with pending demand reserve nodes. A new 
> app might not be able to preempt resources because these nodes are already 
> reserved. This can be reproduced by the example in YARN-6151. 
> Since node reservations are to prevent starvation of apps requesting large 
> containers, triggering these reservations only on starved applications would 
> avoid this situation. 
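
A tiny sketch of the gating idea described above, with hypothetical names (the 
actual FairScheduler/FSAppAttempt logic is considerably more involved):

{code}
/** Illustrative only: reserve a node for an app only if the app is starved. */
final class ReservationGateSketch {

  interface App {
    boolean isStarved();        // e.g. below fair share or min share for some time
    boolean fitsOn(Node node);  // the pending request fits the node's total capacity
  }

  interface Node {
    void reserve(App app);
  }

  static boolean maybeReserve(App app, Node node) {
    // Without the starvation check, any app with unmet demand could pin the node
    // and block preemption-based allocations for other, genuinely starved apps.
    if (app.isStarved() && app.fitsOn(node)) {
      node.reserve(app);
      return true;
    }
    return false;
  }

  private ReservationGateSketch() {}
}
{code}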






[jira] [Updated] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-6210:
---
Attachment: YARN-6210.3.patch

> FS: Node reservations can interfere with preemption
> ---
>
> Key: YARN-6210
> URL: https://issues.apache.org/jira/browse/YARN-6210
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6210.1.patch, YARN-6210.2.patch, YARN-6210.3.patch
>
>
> Today, on a saturated cluster, apps with pending demand reserve nodes. A new 
> app might not be able to preempt resources because these nodes are already 
> reserved. This can be reproduced by the example in YARN-6151. 
> Since node reservations are to prevent starvation of apps requesting large 
> containers, triggering these reservations only on starved applications would 
> avoid this situation. 






[jira] [Commented] (YARN-3884) App History status not updated when RMContainer transitions from RESERVED to KILLED

2017-02-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877372#comment-15877372
 ] 

Rohith Sharma K S commented on YARN-3884:
-

bq. How about move "containerCreated" call to ATS in ALLOCATED state, and leave 
"containerFinished" call to ATS in FINISHED state? 
+1, this makes sense to me. And it is debatable!!  Btw, ATSv2 does not track 
these containers by default because container metrics are published by the 
NodeManager. I think it would be a good metric to track how many 
reserved-container allocation failures occur per application.  Thoughts? 

> App History status not updated when RMContainer transitions from RESERVED to 
> KILLED
> ---
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, 
> YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, 
> YARN-3884.0006.patch, YARN-3884.0007.patch, YARN-3884.0008.patch
>
>
> Setup
> ===
> 1 NM 3072 16 cores each
> Steps to reproduce
> ===
> 1.Submit apps  to Queue 1 with 512 mb 1 core
> 2.Submit apps  to Queue 2 with 512 mb and 5 core
> lots of containers get reserved and unreserved in this case 
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: null, ] queue=default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.0846906, absoluteUsedCapacity=0.4166, numApps=1, 
> numContainers=5 clusterResource=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.default stats: default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> 

[jira] [Commented] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877352#comment-15877352
 ] 

Hadoop QA commented on YARN-4985:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
27s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
10s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
20s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 hadoop-yarn-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 40s{color} | {color:orange} root: The patch generated 33 new + 13 unchanged 
- 2 fixed = 46 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
11s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 hadoop-yarn-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-schema in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-client in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase
 generated 15 new + 0 unchanged - 0 fixed = 15 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-schema
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  2m 
10s{color} | {color:red} 

[jira] [Created] (YARN-6212) NodeManager metrics returning wrong negative values

2017-02-21 Thread Abhishek Shivanna (JIRA)
Abhishek Shivanna created YARN-6212:
---

 Summary: NodeManager metrics returning wrong negative values
 Key: YARN-6212
 URL: https://issues.apache.org/jira/browse/YARN-6212
 Project: Hadoop YARN
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.7.3
Reporter: Abhishek Shivanna


It looks like the metrics returned by the NodeManager have negative values for 
metrics that should never be negative. Here is the output from the NM endpoint: 
{noformat}
/jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics
{noformat}
{noformat}
{
  "beans" : [ {
"name" : "Hadoop:service=NodeManager,name=NodeManagerMetrics",
"modelerType" : "NodeManagerMetrics",
"tag.Context" : "yarn",
"tag.Hostname" : "",
"ContainersLaunched" : 707,
"ContainersCompleted" : 9,
"ContainersFailed" : 124,
"ContainersKilled" : 579,
"ContainersIniting" : 0,
"ContainersRunning" : 19,
"AllocatedGB" : -26,
"AllocatedContainers" : -5,
"AvailableGB" : 252,
"AllocatedVCores" : -5,
"AvailableVCores" : 101,
"ContainerLaunchDurationNumOps" : 718,
"ContainerLaunchDurationAvgTime" : 18.0
  } ]
}
{noformat}

Is there any circumstance under which the values for AllocatedGB, 
AllocatedContainers, and AllocatedVCores can go below 0? 






[jira] [Commented] (YARN-5602) Utils for Federation State and Policy Store

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877309#comment-15877309
 ] 

Hadoop QA commented on YARN-5602:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
12s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
47s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 32s{color} | {color:orange} root: The patch generated 3 new + 4 unchanged - 
0 fixed = 7 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 4 new + 162 unchanged - 0 fixed = 166 total (was 162) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5602 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853842/YARN-5602-YARN-2915.v3.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 14ccefe8ffb7 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-6184) [YARN-3368] Introduce loading icon in each page of YARN-UI

2017-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877295#comment-15877295
 ] 

Sunil G commented on YARN-6184:
---

Committing this patch shortly.

> [YARN-3368] Introduce loading icon in each page of YARN-UI
> --
>
> Key: YARN-6184
> URL: https://issues.apache.org/jira/browse/YARN-6184
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-6184.001.patch, YARN-6184.002.patch, 
> YARN-6184.003.patch, YARN-6184.004.patch, YARN-6184.005.patch, 
> YARN-6184.006.patch
>
>
> Add loading icon in each page in new YARN-UI.






[jira] [Commented] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-02-21 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877293#comment-15877293
 ] 

Sangjin Lee commented on YARN-4985:
---

Thanks for the update [~haibochen].

bq. With this new module, we do still have the coprocessor installation issue. 
I am totally speculating here. Is it possible to configure maven so that it 
will combine hbase-schema and hbase-server into one jar, as a workaround?

It might be doable, but it may involve a fairly advanced Maven trick to pull 
that off. I haven't thought about what it might take.

Going back to the original question, were you able to see if there is any way 
we can limit the dependencies coming from the coprocessor code? If that is 
possible, that is the best-case scenario.

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-4985-YARN-5355.poc.patch, 
> YARN-4985-YARN-5355.prelim.patch
>
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package which does not depend 
> on hadoop-yarn-server classes. It only needs HBase and other util classes.
> These util classes and tag-definition-related classes can be refactored into 
> their own independent "definition" package, so that making changes to the 
> coprocessor code, upgrading HBase, deploying HBase on a cluster with a 
> different Hadoop version, etc., all become operationally much easier and less 
> error-prone with respect to mismatched library jars.






[jira] [Commented] (YARN-6093) Invalid AMRM token exception when using FederationRMFailoverProxyProvider at AMRMtoken renewal during a RM failover

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877253#comment-15877253
 ] 

Hadoop QA commented on YARN-6093:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
52s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
48s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
54s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6093 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853838/YARN-6093-YARN-2915.v5.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2127028b7d56 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 
16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 8d9be84 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15032/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15032/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Invalid AMRM token exception when using 

[jira] [Commented] (YARN-5946) Create YarnConfigurationStore interface and InMemoryConfigurationStore class

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877193#comment-15877193
 ] 

Hadoop QA commented on YARN-5946:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 0s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 25s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5946 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853317/YARN-5946-YARN-5734.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ad4c0d5d8c11 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5734 / 6e1a544 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15031/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15031/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/15031/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15031/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (YARN-6109) Add an ability to convert ChildQueue to ParentQueue

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877190#comment-15877190
 ] 

Hadoop QA commented on YARN-6109:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
11s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  LeafQueue is incompatible with expected argument type String in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.reinitialize(CSQueue,
 Resource)  At ParentQueue.java:argument type String in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.reinitialize(CSQueue,
 Resource)  At ParentQueue.java:[line 325] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6109 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853810/YARN-6109.rebase.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fe5f2ddfb31b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 003ae00 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/15029/artifact/patchprocess/new-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html
 |
| unit | 

[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set

2017-02-21 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877186#comment-15877186
 ] 

Jian He commented on YARN-6153:
---

I'm thinking about how to make the code simpler.
Can this code
{code}
  if (appAttempt.submissionContext
.getKeepContainersAcrossApplicationAttempts()
  && !appAttempt.submissionContext.getUnmanagedAM()) {
// See if we should retain containers for non-unmanaged applications
if (!appAttempt.shouldCountTowardsMaxAttemptRetry()) {
  // Premption, hardware failures, NM resync doesn't count towards
  // app-failures and so we should retain containers.
  keepContainersAcrossAppAttempts = true;
} else if (!appAttempt.maybeLastAttempt) {
  // Not preemption, hardware failures or NM resync.
  // Not last-attempt too - keep containers.
  keepContainersAcrossAppAttempts = true;
} else {
  // After AM reset window time, it is no longer the last attempt.
  long attemptFailuresValidityInterval = appAttempt
  .submissionContext.getAttemptFailuresValidityInterval();
  long end = System.currentTimeMillis();
  if (attemptFailuresValidityInterval > 0
  && appAttempt.getStartTime() < (end
  - attemptFailuresValidityInterval)) {
keepContainersAcrossAppAttempts = true;
  }
}
  }
{code}
be replaced with the same logic as in RMAppImpl?
{code}
if (keepContainersInSubmissionContext && app.getNumFailedAppAttempts() >=
    app.getMaxAttempts()) {
  keepContainers = true;
}
{code}
This would make it more future-proof, since both places would share the same logic.

> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch
>
>
> yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster.
> I submitted a YARN application (a Slider app) with keepContainers=true and 
> attemptFailuresValidityInterval=30.
> It worked properly when the AM failed the first time:
> all containers launched by the previous AM were resynced with the new AM 
> (attempt2) without being killed.
> After 10 minutes, I assumed the AM failure count had been reset by 
> attemptFailuresValidityInterval (5 minutes).
> But all containers were killed when the AM failed the second time (the new 
> AM, attempt3, was launched properly).
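
For reference, here is a minimal sketch of how an attempt-failures validity 
interval is usually applied, mirroring the check quoted in the comment above; 
the class and method names are made up for illustration and are not the actual 
ResourceManager code.

{code}
public class ValidityWindow {
  /**
   * Sketch only: an attempt that started before (now - validityIntervalMs)
   * no longer counts toward the AM failure limit. A non-positive interval
   * disables the window, so every failure counts.
   */
  public static boolean countsTowardFailures(long attemptStartTimeMs,
      long validityIntervalMs, long nowMs) {
    if (validityIntervalMs <= 0) {
      return true;
    }
    return attemptStartTimeMs >= nowMs - validityIntervalMs;
  }

  public static void main(String[] args) {
    long now = System.currentTimeMillis();
    long fiveMinutes = 5 * 60 * 1000L;
    // An attempt that failed 10 minutes ago falls outside a 5-minute window.
    System.out.println(countsTowardFailures(now - 10 * 60 * 1000L, fiveMinutes, now));
  }
}
{code}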



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6093) Invalid AMRM token exception when using FederationRMFailoverProxyProvider at AMRMtoken renewal during a RM failover

2017-02-21 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877167#comment-15877167
 ] 

Jian He commented on YARN-6093:
---

lgtm, thanks [~botong], [~subru]

> Invalid AMRM token exception when using FederationRMFailoverProxyProvider at 
> AMRMtoken renewal during a RM failover
> ---
>
> Key: YARN-6093
> URL: https://issues.apache.org/jira/browse/YARN-6093
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: YARN-2915
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: 
> YARN-6093-08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch, 
> YARN-6093-git08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch, 
> YARN-6093.v1.patch, YARN-6093-YARN-2915.v1.patch, 
> YARN-6093-YARN-2915.v2.patch, YARN-6093-YARN-2915.v3.patch, 
> YARN-6093-YARN-2915.v4.patch, YARN-6093-YARN-2915.v5.patch
>
>
> AMRMProxy uses an expired AMRMToken to talk to the RM, leading to the "Invalid 
> AMRMToken" exception. The bug is triggered when both conditions are met: 
> 1. The RM rolls the master key and renews the AMRMToken for a running AM.
> 2. The existing RPC connection between AMRMProxy and the RM drops, and a 
> reconnect is attempted via failover in FederationRMFailoverProxyProvider. 
> Here's what happens: 
> In DefaultRequestInterceptor.init(), we create a proxy ugi, load it with the 
> initial AMRMToken issued by the RM, and use it to initiate rmClient. Then we 
> arrive at FederationRMFailoverProxyProvider.init(), where a full copy of the 
> ugi tokens is saved locally, an actual RM proxy is created, and the RPC 
> connection is set up. 
> Later, when the RM rolls the master key and issues a new AMRMToken, 
> DefaultRequestInterceptor.updateAMRMToken() updates it into the proxy ugi. 
> However, the new token is never used until the existing RPC connection 
> between AMRMProxy and the RM drops for other reasons (say the master RM 
> crashes). 
> When we try to reconnect, since the service name of the new AMRMToken is not 
> yet set correctly in DefaultRequestInterceptor.updateAMRMToken(), RPC finds 
> no valid AMRMToken when trying to set up a new connection. We first hit a 
> "Client cannot authenticate via:[TOKEN]" exception. This is expected. 
> Next, FederationRMFailoverProxyProvider fails over, we reset the service 
> token via ClientRMProxy.getRMAddress() and reconnect. Supposedly this would 
> have worked. 
> However, since DefaultRequestInterceptor does not use the proxy user for 
> later calls to rmClient, we are not in the proxy user when performing 
> failover in FederationRMFailoverProxyProvider. Currently the code solves the 
> problem by reloading the current ugi with all the tokens saved locally in 
> originalTokens in the addOriginalTokens() method. The problem is that the 
> original AMRMToken loaded this way is no longer accepted by the RM, so we 
> keep hitting the "Invalid AMRMToken" exception until the AM fails. 
> The correct fix is that, rather than saving the original tokens in the proxy 
> ugi, we save the original ugi itself. Every time we perform failover and 
> create the new RM proxy, we use the original ugi, which is always loaded 
> with the up-to-date AMRMToken. 
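
A minimal sketch of the approach described in the last paragraph, assuming a 
hypothetical RmProxyFactory and wrapper class (this is not the actual 
FederationRMFailoverProxyProvider code): keep a reference to the original proxy 
ugi and create every new RM proxy inside it, so the RPC layer always sees the 
token that updateAMRMToken() last placed there.

{code}
import java.security.PrivilegedAction;
import org.apache.hadoop.security.UserGroupInformation;

/** Sketch only; the factory and wrapper are placeholders, not YARN classes. */
class OriginalUgiProxyCreator<T> {
  interface RmProxyFactory<T> {
    T create();
  }

  private final UserGroupInformation originalUgi;  // saved once at init time
  private final RmProxyFactory<T> factory;

  OriginalUgiProxyCreator(UserGroupInformation originalUgi,
      RmProxyFactory<T> factory) {
    this.originalUgi = originalUgi;
    this.factory = factory;
  }

  /** Called on every failover; the ugi carries the freshest AMRMToken. */
  T createProxyAsOriginalUser() {
    return originalUgi.doAs((PrivilegedAction<T>) factory::create);
  }
}
{code}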



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5602) Utils for Federation State and Policy Store

2017-02-21 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5602:
---
Attachment: YARN-5602-YARN-2915.v3.patch

> Utils for Federation State and Policy Store
> ---
>
> Key: YARN-5602
> URL: https://issues.apache.org/jira/browse/YARN-5602
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>  Labels: oct16-medium
> Attachments: YARN-5602-YARN-2915.v1.patch, 
> YARN-5602-YARN-2915.v2.patch, YARN-5602-YARN-2915.v3.patch
>
>
> This JIRA tracks the creation of utils for Federation State and Policy Store 
> such as Error Codes, Exceptions...



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6093) Invalid AMRM token exception when using FederationRMFailoverProxyProvider at AMRMtoken renewal during a RM failover

2017-02-21 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877107#comment-15877107
 ] 

Botong Huang commented on YARN-6093:


v5 patch uploaded. Fixed another issue: when performFailover fails, we should 
fall back to the old proxy instance rather than return null, which would cause 
a NullPointerException later.
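
A small, generic sketch of that fall-back behaviour, with placeholder types 
rather than the actual provider code: on a failed failover the previous proxy 
is kept, so callers never see null.

{code}
import java.io.IOException;

/** Sketch only; the proxy type and factory are placeholders, not YARN classes. */
class FallbackFailover<T> {
  interface ProxyFactory<T> {
    T newProxy() throws IOException;
  }

  private final ProxyFactory<T> factory;
  private T currentProxy;

  FallbackFailover(ProxyFactory<T> factory, T initialProxy) {
    this.factory = factory;
    this.currentProxy = initialProxy;
  }

  /** Try to switch to a fresh proxy; keep the old one if creation fails. */
  synchronized T performFailover() {
    try {
      currentProxy = factory.newProxy();
    } catch (IOException e) {
      // Returning the previous proxy avoids the NullPointerException that
      // returning null would cause in later calls.
      System.err.println("Failover failed, reusing previous proxy: " + e);
    }
    return currentProxy;
  }
}
{code}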

> Invalid AMRM token exception when using FederationRMFailoverProxyProvider at 
> AMRMtoken renewal during a RM failover
> ---
>
> Key: YARN-6093
> URL: https://issues.apache.org/jira/browse/YARN-6093
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: YARN-2915
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: 
> YARN-6093-08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch, 
> YARN-6093-git08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch, 
> YARN-6093.v1.patch, YARN-6093-YARN-2915.v1.patch, 
> YARN-6093-YARN-2915.v2.patch, YARN-6093-YARN-2915.v3.patch, 
> YARN-6093-YARN-2915.v4.patch, YARN-6093-YARN-2915.v5.patch
>
>
> AMRMProxy uses an expired AMRMToken to talk to the RM, leading to the "Invalid 
> AMRMToken" exception. The bug is triggered when both conditions are met: 
> 1. The RM rolls the master key and renews the AMRMToken for a running AM.
> 2. The existing RPC connection between AMRMProxy and the RM drops, and a 
> reconnect is attempted via failover in FederationRMFailoverProxyProvider. 
> Here's what happens: 
> In DefaultRequestInterceptor.init(), we create a proxy ugi, load it with the 
> initial AMRMToken issued by the RM, and use it to initiate rmClient. Then we 
> arrive at FederationRMFailoverProxyProvider.init(), where a full copy of the 
> ugi tokens is saved locally, an actual RM proxy is created, and the RPC 
> connection is set up. 
> Later, when the RM rolls the master key and issues a new AMRMToken, 
> DefaultRequestInterceptor.updateAMRMToken() updates it into the proxy ugi. 
> However, the new token is never used until the existing RPC connection 
> between AMRMProxy and the RM drops for other reasons (say the master RM 
> crashes). 
> When we try to reconnect, since the service name of the new AMRMToken is not 
> yet set correctly in DefaultRequestInterceptor.updateAMRMToken(), RPC finds 
> no valid AMRMToken when trying to set up a new connection. We first hit a 
> "Client cannot authenticate via:[TOKEN]" exception. This is expected. 
> Next, FederationRMFailoverProxyProvider fails over, we reset the service 
> token via ClientRMProxy.getRMAddress() and reconnect. Supposedly this would 
> have worked. 
> However, since DefaultRequestInterceptor does not use the proxy user for 
> later calls to rmClient, we are not in the proxy user when performing 
> failover in FederationRMFailoverProxyProvider. Currently the code solves the 
> problem by reloading the current ugi with all the tokens saved locally in 
> originalTokens in the addOriginalTokens() method. The problem is that the 
> original AMRMToken loaded this way is no longer accepted by the RM, so we 
> keep hitting the "Invalid AMRMToken" exception until the AM fails. 
> The correct fix is that, rather than saving the original tokens in the proxy 
> ugi, we save the original ugi itself. Every time we perform failover and 
> create the new RM proxy, we use the original ugi, which is always loaded 
> with the up-to-date AMRMToken. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6093) Invalid AMRM token exception when using FederationRMFailoverProxyProvider at AMRMtoken renewal during a RM failover

2017-02-21 Thread Botong Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-6093:
---
Attachment: YARN-6093-YARN-2915.v5.patch

> Invalid AMRM token exception when using FederationRMFailoverProxyProvider at 
> AMRMtoken renewal during a RM failover
> ---
>
> Key: YARN-6093
> URL: https://issues.apache.org/jira/browse/YARN-6093
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: amrmproxy, federation
>Affects Versions: YARN-2915
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: 
> YARN-6093-08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch, 
> YARN-6093-git08dc09581230ba595ce48fe7d3bc4eb2b6f98091.v4.patch, 
> YARN-6093.v1.patch, YARN-6093-YARN-2915.v1.patch, 
> YARN-6093-YARN-2915.v2.patch, YARN-6093-YARN-2915.v3.patch, 
> YARN-6093-YARN-2915.v4.patch, YARN-6093-YARN-2915.v5.patch
>
>
> AMRMProxy uses an expired AMRMToken to talk to the RM, leading to the "Invalid 
> AMRMToken" exception. The bug is triggered when both conditions are met: 
> 1. The RM rolls the master key and renews the AMRMToken for a running AM.
> 2. The existing RPC connection between AMRMProxy and the RM drops, and a 
> reconnect is attempted via failover in FederationRMFailoverProxyProvider. 
> Here's what happens: 
> In DefaultRequestInterceptor.init(), we create a proxy ugi, load it with the 
> initial AMRMToken issued by the RM, and use it to initiate rmClient. Then we 
> arrive at FederationRMFailoverProxyProvider.init(), where a full copy of the 
> ugi tokens is saved locally, an actual RM proxy is created, and the RPC 
> connection is set up. 
> Later, when the RM rolls the master key and issues a new AMRMToken, 
> DefaultRequestInterceptor.updateAMRMToken() updates it into the proxy ugi. 
> However, the new token is never used until the existing RPC connection 
> between AMRMProxy and the RM drops for other reasons (say the master RM 
> crashes). 
> When we try to reconnect, since the service name of the new AMRMToken is not 
> yet set correctly in DefaultRequestInterceptor.updateAMRMToken(), RPC finds 
> no valid AMRMToken when trying to set up a new connection. We first hit a 
> "Client cannot authenticate via:[TOKEN]" exception. This is expected. 
> Next, FederationRMFailoverProxyProvider fails over, we reset the service 
> token via ClientRMProxy.getRMAddress() and reconnect. Supposedly this would 
> have worked. 
> However, since DefaultRequestInterceptor does not use the proxy user for 
> later calls to rmClient, we are not in the proxy user when performing 
> failover in FederationRMFailoverProxyProvider. Currently the code solves the 
> problem by reloading the current ugi with all the tokens saved locally in 
> originalTokens in the addOriginalTokens() method. The problem is that the 
> original AMRMToken loaded this way is no longer accepted by the RM, so we 
> keep hitting the "Invalid AMRMToken" exception until the AM fails. 
> The correct fix is that, rather than saving the original tokens in the proxy 
> ugi, we save the original ugi itself. Every time we perform failover and 
> create the new RM proxy, we use the original ugi, which is always loaded 
> with the up-to-date AMRMToken. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4779) Fix AM container allocation logic in SLS

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877099#comment-15877099
 ] 

Hadoop QA commented on YARN-4779:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-tools_hadoop-sls generated 0 new + 20 
unchanged - 3 fixed = 20 total (was 23) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4779 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853807/YARN-4779.5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aaedd2a8870a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 003ae00 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15028/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15028/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix AM container allocation logic in SLS
> 
>
> Key: YARN-4779
> URL: https://issues.apache.org/jira/browse/YARN-4779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-medium
> Attachments: YARN-4779.1.patch, YARN-4779.2.patch, 

[jira] [Commented] (YARN-6143) Fix incompatible issue caused by YARN-3583

2017-02-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877026#comment-15877026
 ] 

Wangda Tan commented on YARN-6143:
--

+1, I will commit it tomorrow if there are no objections.

> Fix incompatible issue caused by YARN-3583
> --
>
> Key: YARN-6143
> URL: https://issues.apache.org/jira/browse/YARN-6143
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: rolling upgrade
>Reporter: Wangda Tan
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6143.0001.patch, YARN-6143.0002.patch, 
> YARN-6143.0003.patch, YARN-6143.0004.patch, YARN-6143.0005.patch
>
>
> As mentioned in this comment: 
> https://issues.apache.org/jira/browse/YARN-6142?focusedCommentId=15852009=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15852009.
>  We need to fix the incompatibility introduced by YARN-3583.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6143) Fix incompatible issue caused by YARN-3583

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877020#comment-15877020
 ] 

Hadoop QA commented on YARN-6143:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m  
0s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
23s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}100m 
50s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}277m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6143 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853793/YARN-6143.0005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 50dde15f9b24 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2158496 |
| Default Java 

[jira] [Commented] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877010#comment-15877010
 ] 

Daniel Templeton commented on YARN-6210:


Thanks for the patch, [~kasha].  A few comments:

* {{ResourceCalculator.compare()}}: {{singleType}} needs to be explained 
better.  I still don't know what it means.
* {{TestFailSchedulerPreemption.writeAllocFile()}}: should _preemptable_ maybe 
be _preemptable-1_?
* {{FSAppAttempt.isStarvedForFairShare()}}: No likey the "0 > ..."  There's no 
reason for the weird inverted order, and _less than x_ is easier to reason about.
* {{FSAppAttempt.isStarved()}}: looks like it can be private...
* {{FSAppAttempt.fairShareStarvation()}}: the meaning of _starved_ changed.  It 
would be nice to document in a comment why it's OK that _starved_ doesn't 
account for the threshold whereas the return value does.
* {{TestFairScheduler.testReservationWhileMultiplePriorities()}}: The comment 
{{Create one application and take up all resources}} isn't actually true, is it?
* {{TestFairScheduler.testReservationWhileMultiplePriorities()}}: Looks like 
you took out the tests that verify the scheduler resources are as expected at 
the various steps, and the test that the reservation is still for the 
lower-priority request. Not critical, but I would leave them in if it were me.
* {{TestFairScheduler.testReservationsStrictLocality}}: I like messages in my 
asserts.

> FS: Node reservations can interfere with preemption
> ---
>
> Key: YARN-6210
> URL: https://issues.apache.org/jira/browse/YARN-6210
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6210.1.patch, YARN-6210.2.patch
>
>
> Today, on a saturated cluster, apps with pending demand reserve nodes. A new 
> app might not be able to preempt resources because these nodes are already 
> reserved. This can be reproduced by the example in YARN-6151. 
> Since node reservations are meant to prevent starvation of apps requesting 
> large containers, triggering these reservations only for starved applications 
> would avoid this situation. 
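
A minimal sketch of that gating idea, with placeholder interfaces rather than 
the real FairScheduler classes: a node reservation is only placed when the 
requesting app is currently starved.

{code}
/** Sketch only; App and Node are placeholders, not FairScheduler types. */
class ReservationGate {
  interface App {
    boolean isStarved();
  }

  interface Node {
    boolean reserve(App app);
  }

  /** Returns true only if a reservation was actually placed. */
  static boolean maybeReserve(App app, Node node) {
    if (!app.isStarved()) {
      // On a saturated cluster, skipping the reservation leaves the node
      // available for preemption on behalf of starved apps.
      return false;
    }
    return node.reserve(app);
  }
}
{code}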



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-02-21 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-4985:
-
Attachment: YARN-4985-YARN-5355.poc.patch

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-4985-YARN-5355.poc.patch, 
> YARN-4985-YARN-5355.prelim.patch
>
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package that does not depend 
> on hadoop-yarn-server classes; it only needs hbase and other util classes.
> These util classes and tag-definition-related classes can be refactored into 
> their own independent "definition" package, so that making changes to the 
> coprocessor code, upgrading hbase, deploying hbase on a cluster with a 
> different hadoop version, etc. all become operationally much easier and less 
> error prone with respect to mismatched library jars.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3884) App History status not updated when RMContainer transitions from RESERVED to KILLED

2017-02-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876799#comment-15876799
 ] 

Wangda Tan commented on YARN-3884:
--

Looking at this JIRA again, I would suggest not exposing RESERVED containers to 
ATS, since the RESERVED container is more of an internal state, and the 
scheduler doesn't treat it as a normal container either (e.g. it doesn't invoke 
FinishedTransition). In other words, it is just a placeholder used while 
allocating containers.

How about moving the "containerCreated" call to ATS to the ALLOCATED state, and 
leaving the "containerFinished" call to ATS in the FINISHED state? Please feel 
free to share your thoughts.
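
A rough sketch of that suggestion, using placeholder types rather than the real 
RMContainerImpl transitions: container creation is reported to ATS at ALLOCATED 
and completion at FINISHED, so the internal RESERVED state never reaches ATS.

{code}
/** Sketch only; Publisher and State are placeholders, not YARN classes. */
class ContainerLifecyclePublisher {
  interface Publisher {
    void containerCreated(String containerId, long timestamp);
    void containerFinished(String containerId, long timestamp);
  }

  enum State { NEW, RESERVED, ALLOCATED, RUNNING, FINISHED }

  private final Publisher publisher;

  ContainerLifecyclePublisher(Publisher publisher) {
    this.publisher = publisher;
  }

  void onTransition(String containerId, State newState) {
    long now = System.currentTimeMillis();
    if (newState == State.ALLOCATED) {
      publisher.containerCreated(containerId, now);   // RESERVED is skipped
    } else if (newState == State.FINISHED) {
      publisher.containerFinished(containerId, now);
    }
  }
}
{code}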


> App History status not updated when RMContainer transitions from RESERVED to 
> KILLED
> ---
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, 
> YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, 
> YARN-3884.0006.patch, YARN-3884.0007.patch, YARN-3884.0008.patch
>
>
> Setup
> ===
> 1 NM 3072 16 cores each
> Steps to reproduce
> ===
> 1.Submit apps  to Queue 1 with 512 mb 1 core
> 2.Submit apps  to Queue 2 with 512 mb and 5 core
> lots of containers get reserved and unreserved in this case 
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: null, ] queue=default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.0846906, absoluteUsedCapacity=0.4166, numApps=1, 
> numContainers=5 clusterResource=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.default stats: default: capacity=0.2, 
> absoluteCapacity=0.2, usedResources=, 
> usedCapacity=2.5016286, absoluteUsedCapacity=0.5, numApps=1, numContainers=6
> 2015-07-02 20:45:31,191 INFO 
> 

[jira] [Commented] (YARN-6175) Negative vcore for resource needed to preempt

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876793#comment-15876793
 ] 

Hadoop QA commented on YARN-6175:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
53s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
43s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
45s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_121 with 
JDK v1.8.0_121 generated 0 new + 58 unchanged - 1 fixed = 58 total (was 59) 
{color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
52s{color} | {color:green} hadoop-yarn-project_hadoop-yarn-jdk1.7.0_121 with 
JDK v1.7.0_121 generated 0 new + 68 unchanged - 2 fixed = 68 total (was 70) 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 100 unchanged - 2 fixed = 100 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
29s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF 

[jira] [Commented] (YARN-6197) CS Leaf queue am usage gets updated for unmanaged AM

2017-02-21 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876771#comment-15876771
 ] 

Wangda Tan commented on YARN-6197:
--

I would prefer to keep the logic as-is. There are two major purposes of the 
AM-percentage: a. avoid all the resources in a queue being consumed by AMs; 
b. avoid too many concurrent apps running.

If we assume the AM resource of an unmanaged AM to be zero, then under the same 
configuration (by default {{maximum-applications}} is set to 10k) too many apps 
can get started, so it would change the behavior.

And also, under the existing logic, LeafQueue skips apps until it finds an app 
that can fit in the available AM resource quota, which makes the unmanaged AM 
get activated when the queue doesn't have free AM resource quota.

> CS Leaf queue am usage gets updated for unmanaged AM
> 
>
> Key: YARN-6197
> URL: https://issues.apache.org/jira/browse/YARN-6197
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>
> In {{LeafQueue#activateApplication()}}, for an unmanaged AM the am_usage is 
> updated with the scheduler minimum allocation size, so the cluster resource/AM 
> limit headroom for other apps in the queue gets reduced.
> Solution: the FicaScheduler unManagedAM flag can be used to check the AM type.
> Based on the flag, the queue usage would be updated accordingly during 
> activation and removal.
> Thoughts??
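
A minimal sketch of what the description proposes, with placeholder types 
rather than the real CapacityScheduler classes (note that the comment above 
argues for keeping the current accounting): AM usage is only charged and 
released for managed AMs, keeping activation and removal symmetric.

{code}
/** Sketch only; Attempt and QueueUsage are placeholders, not YARN classes. */
class AmUsageAccounting {
  interface Attempt {
    boolean isUnmanagedAM();
    long amResourceMb();
  }

  interface QueueUsage {
    void incAmUsed(long mb);
    void decAmUsed(long mb);
  }

  static void onActivate(Attempt attempt, QueueUsage usage) {
    if (!attempt.isUnmanagedAM()) {
      usage.incAmUsed(attempt.amResourceMb());  // only managed AMs use AM headroom
    }
  }

  static void onRemove(Attempt attempt, QueueUsage usage) {
    if (!attempt.isUnmanagedAM()) {
      usage.decAmUsed(attempt.amResourceMb());  // keep removal symmetric
    }
  }
}
{code}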



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6109) Add an ability to convert ChildQueue to ParentQueue

2017-02-21 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6109:

Attachment: YARN-6109.rebase.patch

rebase the patch

> Add an ability to convert ChildQueue to ParentQueue
> ---
>
> Key: YARN-6109
> URL: https://issues.apache.org/jira/browse/YARN-6109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6109.1.patch, YARN-6109.rebase.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-21 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876765#comment-15876765
 ] 

Li Lu commented on YARN-6069:
-

Sorry to chime in this late, but I have one general question about CORS itself. 
I'm not an expert in this area, so my concern may sound silly. In ATS v1, a 
single server serves as both the reader and the writer, so my feeling is that 
the CORS setting will affect both sides? In ATS v2, we're only applying this 
setting to the reader server, but not to the collectors. Is this generally 
fine? Are writer APIs irrelevant in this case? Or is this difference 
significant enough that we need separate configs, or should note it specially? 
Thanks! 

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch, 
> YARN-6069-YARN-5355.0002.patch, YARN-6069-YARN-5355.0003.patch, 
> YARN-6069-YARN-5355.0004.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence, without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4779) Fix AM container allocation logic in SLS

2017-02-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4779:
-
Attachment: YARN-4779.5.patch

> Fix AM container allocation logic in SLS
> 
>
> Key: YARN-4779
> URL: https://issues.apache.org/jira/browse/YARN-4779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>  Labels: oct16-medium
> Attachments: YARN-4779.1.patch, YARN-4779.2.patch, YARN-4779.3.patch, 
> YARN-4779.4.patch, YARN-4779.5.patch
>
>
> Currently, SLS uses an unmanaged AM for simulated map-reduce applications, and 
> the first allocated container for each app is considered to be the master 
> container.
> This can be problematic when preemption happens. CapacityScheduler preempts 
> AM containers at the lowest priority, but the simulated AM container isn't 
> recognized by the scheduler -- it is a normal container from the scheduler's 
> perspective.
> This JIRA tries to fix this logic: do a real AM allocation instead of using 
> an unmanaged AM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-02-21 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-6146:


Assignee: Haibo Chen

> Add Builder methods for TimelineEntityFilters
> -
>
> Key: YARN-6146
> URL: https://issues.apache.org/jira/browse/YARN-6146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>
> The timeline filters are evolving, and more and more filters can be added. It 
> is better to start using Builder methods rather than changing the constructor 
> every time a new filter is added.
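
A rough sketch of what such a builder could look like; the field and method 
names below are illustrative assumptions, not the committed API:
{code}
// Hypothetical builder sketch for TimelineEntityFilters (illustrative only).
public class TimelineEntityFilters {
  private final Long limit;
  private final Long createdTimeBegin;
  private final Long createdTimeEnd;

  private TimelineEntityFilters(Builder b) {
    this.limit = b.limit;
    this.createdTimeBegin = b.createdTimeBegin;
    this.createdTimeEnd = b.createdTimeEnd;
  }

  public static class Builder {
    private Long limit;
    private Long createdTimeBegin;
    private Long createdTimeEnd;

    public Builder entityLimit(Long limit) { this.limit = limit; return this; }
    public Builder createdTimeBegin(Long ts) { this.createdTimeBegin = ts; return this; }
    public Builder createdTimeEnd(Long ts) { this.createdTimeEnd = ts; return this; }
    public TimelineEntityFilters build() { return new TimelineEntityFilters(this); }
  }
}
{code}
Adding a new filter later then only means adding one Builder method; existing 
call sites such as {{new TimelineEntityFilters.Builder().entityLimit(100L).build()}} 
keep compiling.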



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-02-21 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876620#comment-15876620
 ] 

Haibo Chen commented on YARN-4985:
--

Apologies for my delayed response! I did not have answers to your questions, so 
I took some time to actually try it out. Attached is my POC patch.

I managed to extract a common module that both the client and server code 
depend on. The number of dependencies of the new schema module is much smaller, 
but it still includes hadoop-yarn-api (for the use of ApplicationId) and 
hadoop-yarn-server-applicationhistoryservice (for the use of 
GenericObjectMapper), as I think the ValueConverters belong to the schema. With 
this new module, the undesirable dependency of the hbase-server module on 
hbase-client is no longer necessary.

I was not able, however, to redistribute the tests in hbase-tests into the 
client and server modules. The reason is that all tests, regardless of whether 
they are server or client tests, depend on HBaseTestingUtility, which only 
works with hadoop-common-2.5.1. Therefore, in that sense, I think both the 
tests for hbase-client and the tests for hbase-server should still reside in 
the same module.

With this new module, we do still have the coprocessor installation issue. I am 
totally speculating here. Is it possible to configure maven so that it will 
combine hbase-schema and hbase-server into one jar, as a workaround?

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-4985-YARN-5355.prelim.patch
>
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package which does not depend 
> on hadoop-yarn-server classes. It only needs hbase and other util classes.
> These util classes and tag definition related classes can be refactored into 
> their own independent "definition" class package so that making changes to 
> coprocessor code, upgrading hbase, deploying hbase on a different hadoop 
> version cluster etc all becomes operationally much easier and less error 
> prone to having different library jars etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6191) CapacityScheduler preemption by container priority can be problematic for MapReduce

2017-02-21 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876567#comment-15876567
 ] 

Chris Douglas commented on YARN-6191:
-

bq. However there's still an issue because the preemption message is too 
general. For example, if the message says "going to preempt 60GB of resources" 
and the AM kills 10 reducers that are 6GB each on 6 different nodes, the RM can 
still kill the maps because the RM needed 60GB of contiguous resources.

I haven't followed the modifications to the preemption policy, so I don't know 
if the AM will be selected as a victim again even after satisfying the contract 
(it should not). The preemption message should be expressive enough to encode 
this, if that's the current behavior. If the RM will only accept 60GB of 
resources from a single node, then that can be encoded in a ResourceRequest in 
the preemption message.

Even if everything behaves badly, killing the reducers is still correct, right? 
If the job is still entitled to resources, then it should reschedule the map 
tasks before the reducers. There are still interleavings of requests that could 
result in the same behavior described in this JIRA, but they'd be stunningly 
unlucky.

bq. I still wonder about the logic of preferring lower container priorities 
regardless of how long they've been running. I'm not sure container priority 
always translates well to how important a container is to the application, and 
we might be better served by preferring to minimize total lost work regardless 
of container priority.

All of the options [~sunilg] suggests are fine heuristics, but the application 
has the best view of the tradeoffs. For example, a long-running container might 
be amortizing the cost of scheduling short-lived tasks, and might actually be 
cheap to kill. If the preemption message is not accurately reporting the 
contract the RM is enforcing, then we should absolutely fix that. But I think 
this is a MapReduce problem, ultimately.
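
As a rough illustration only (not the MapReduce AM's actual preemption policy), 
this is how an AM could read the two parts of the preemption message from the 
allocate response; the "release reducers first" choice in the comments is an 
assumption made for the example:
{code}
// Illustrative sketch of an AM inspecting the RM's preemption message; not the
// actual MapReduce preemption policy.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.PreemptionContainer;
import org.apache.hadoop.yarn.api.records.PreemptionMessage;

public class PreemptionMessageReader {
  public List<ContainerId> pickVictims(AllocateResponse response) {
    List<ContainerId> victims = new ArrayList<>();
    PreemptionMessage msg = response.getPreemptionMessage();
    if (msg == null) {
      return victims;
    }
    // Strict contract: these specific containers will be killed regardless,
    // so release them first.
    if (msg.getStrictContract() != null) {
      for (PreemptionContainer c : msg.getStrictContract().getContainers()) {
        victims.add(c.getId());
      }
    }
    // Negotiable contract: the AM may satisfy the listed ResourceRequests
    // however it prefers, e.g. by releasing reducers so that completed map
    // output is not thrown away.
    if (msg.getContract() != null && msg.getContract().getResourceRequest() != null) {
      // ...choose containers whose combined resource covers the requested amount...
    }
    return victims;
  }
}
{code}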

> CapacityScheduler preemption by container priority can be problematic for 
> MapReduce
> ---
>
> Key: YARN-6191
> URL: https://issues.apache.org/jira/browse/YARN-6191
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Reporter: Jason Lowe
>
> A MapReduce job with thousands of reducers and just a couple of maps left to 
> go was running in a preemptable queue.  Periodically other queues would get 
> busy and the RM would preempt some resources from the job, but it _always_ 
> picked the job's map tasks first because they use the lowest priority 
> containers.  Even though the reducers had a shorter running time, most were 
> spared but the maps were always shot.  Since the map tasks ran for a longer 
> time than the preemption period, the job was in a perpetual preemption loop.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876503#comment-15876503
 ] 

Varun Saxena commented on YARN-6069:


Thanks [~rohithsharma] for the patch.
The configuration yarn.timeline-service.http-cross-origin.enabled is used for 
both ATSv1.x and ATSv2, but the behavior is slightly different, and this is not 
captured in yarn-default.xml. For instance, in ATSv1.x, if this configuration 
is enabled, CrossOriginFilterInitializer will be added automatically, but this 
is not the case with ATSv2. How about the description below?
{code}
Flag to enable cross-origin (CORS) support for timeline service v1.x or
Timeline Reader in timeline service v2. For timeline service v2, also add
org.apache.hadoop.security.HttpCrossOriginFilterInitializer to the
configuration hadoop.http.filter.initializers in core-site.xml.
{code}
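
For completeness, a small sketch (assuming a programmatic setup such as a test; 
normally these settings go in yarn-site.xml and core-site.xml) of the two 
properties the description above refers to:
{code}
// Sketch only: setting the two properties from the description programmatically.
import org.apache.hadoop.conf.Configuration;

public class TimelineReaderCorsConfig {
  public static Configuration corsEnabledConf() {
    Configuration conf = new Configuration();
    // Enables CORS for timeline service v1.x, or the Timeline Reader in v2.
    conf.setBoolean("yarn.timeline-service.http-cross-origin.enabled", true);
    // For timeline service v2 the filter initializer must also be registered
    // explicitly.
    conf.set("hadoop.http.filter.initializers",
        "org.apache.hadoop.security.HttpCrossOriginFilterInitializer");
    return conf;
  }
}
{code}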

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch, 
> YARN-6069-YARN-5355.0002.patch, YARN-6069-YARN-5355.0003.patch, 
> YARN-6069-YARN-5355.0004.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence, without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6143) Fix incompatible issue caused by YARN-3583

2017-02-21 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6143:
-
Attachment: YARN-6143.0005.patch

Attached ver.0005 patch, which fixes the findbugs warnings and UT failures. 


> Fix incompatible issue caused by YARN-3583
> --
>
> Key: YARN-6143
> URL: https://issues.apache.org/jira/browse/YARN-6143
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: rolling upgrade
>Reporter: Wangda Tan
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6143.0001.patch, YARN-6143.0002.patch, 
> YARN-6143.0003.patch, YARN-6143.0004.patch, YARN-6143.0005.patch
>
>
> As mentioned by comment: 
> https://issues.apache.org/jira/browse/YARN-6142?focusedCommentId=15852009=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15852009.
>  We need to fix the incompatible issue caused by YARN-3583.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3884) App History status not updated when RMContainer transitions from RESERVED to KILLED

2017-02-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876467#comment-15876467
 ] 

Varun Saxena commented on YARN-3884:


Thanks [~sunilg] for your comments.

bq. Now coming to the patch, I think FiCaSchedulerApp.unreserve is a better 
place to raise an event to the container. 
As discussed offline, the reason we are sending the event to RMContainerImpl 
from LeafQueue is that we want to send it only when the container is in the 
RESERVED state. We do not need to handle this for an increased reservation, for 
instance.

bq. There is a potential chance of invalid state transitions, but at first 
glance it looks like the basic events are handled. 
I think other than KILLED and RELEASED, we do not need to handle any other 
transition. The EXPIRE event would only come after the container has been 
acquired, which won't be possible in the RESERVED state. So I think we are fine 
here.

bq. Could we use FinishedTransition of RMContainerImpl, which already handles 
updating the finish time etc.? 
I am fine with either. Some extra change will be required if we use 
FinishedTransition: we will have to check the current state of the container 
and not send the attempt event or update the attempt metrics. Also, we will 
have to send an RMContainerFinishedEvent from LeafQueue.
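
Either way, the guard being discussed is roughly the following; this is an 
illustrative sketch only (the helper class and its placement are hypothetical, 
not the committed patch):
{code}
// Illustrative sketch only: send the KILL event only while the container is
// still in the RESERVED state, e.g. from LeafQueue when unreserving.
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEvent;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEventType;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerState;

class ReservedContainerKillHelper {   // hypothetical helper, for illustration
  static void killIfStillReserved(RMContext rmContext, RMContainer rmContainer) {
    if (rmContainer.getState() == RMContainerState.RESERVED) {
      // Record the final KILLED status so app history is updated for the
      // reserved container as well.
      rmContext.getDispatcher().getEventHandler().handle(
          new RMContainerEvent(rmContainer.getContainerId(),
              RMContainerEventType.KILL));
    }
  }
}
{code}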

> App History status not updated when RMContainer transitions from RESERVED to 
> KILLED
> ---
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, 
> YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, 
> YARN-3884.0006.patch, YARN-3884.0007.patch, YARN-3884.0008.patch
>
>
> Setup
> ===
> 1 NM 3072 16 cores each
> Steps to reproduce
> ===
> 1.Submit apps  to Queue 1 with 512 mb 1 core
> 2.Submit apps  to Queue 2 with 512 mb and 5 core
> lots of containers get reserved and unreserved in this case 
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application attempt=appattempt_1435849994778_0001_01 
> container=Container: [ContainerId: 
> container_e24_1435849994778_0001_01_14, NodeId: host-10-19-92-117:64318, 
> NodeHttpAddress: host-10-19-92-117:65321, Resource: , 
> Priority: 20, Token: 

[jira] [Commented] (YARN-6175) Negative vcore for resource needed to preempt

2017-02-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876427#comment-15876427
 ] 

Yufei Gu commented on YARN-6175:


[~kasha], good catch. Resources.max isn't necessary. Uploaded the patch v3 to 
fix it.

> Negative vcore for resource needed to preempt
> -
>
> Key: YARN-6175
> URL: https://issues.apache.org/jira/browse/YARN-6175
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6175.001.patch, YARN-6175.branch-2.8.002.patch, 
> YARN-6175.branch-2.8.003.patch
>
>
> Both old preemption code (2.8 and before) and new preemption code could have 
> negative vcores while calculating resources needed to preempt.
> For old preemption, you can find the following messages in the RM logs:
> {code}
> Should preempt  
> {code}
> The related code is in method {{resourceDeficit()}}. 
> For the new preemption code, there are no messages in the RM logs; the 
> related code is in the method {{fairShareStarvation()}}. 
> The negative value isn't only a display issue; it may also cause necessary 
> preemption to be missed. 
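
As a generic illustration of avoiding the negative value (not necessarily what 
the attached patches do, and the variable names are assumptions), the deficit 
can be clamped componentwise at zero:
{code}
// Generic illustration only; not necessarily the YARN-6175 fix.
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class PreemptionDeficit {
  static Resource toPreempt(Resource fairShare, Resource usage) {
    // Subtraction can go negative per component (e.g. vcores) when usage
    // exceeds the fair share in that dimension.
    Resource deficit = Resources.subtract(fairShare, usage);
    // Clamp memory and vcores at zero so we never report a negative amount.
    return Resources.componentwiseMax(deficit, Resources.none());
  }
}
{code}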



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6175) Negative vcore for resource needed to preempt

2017-02-21 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6175:
---
Attachment: YARN-6175.branch-2.8.003.patch

> Negative vcore for resource needed to preempt
> -
>
> Key: YARN-6175
> URL: https://issues.apache.org/jira/browse/YARN-6175
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6175.001.patch, YARN-6175.branch-2.8.002.patch, 
> YARN-6175.branch-2.8.003.patch
>
>
> Both old preemption code (2.8 and before) and new preemption code could have 
> negative vcores while calculating resources needed to preempt.
> For old preemption, you can find the following messages in the RM logs:
> {code}
> Should preempt  
> {code}
> The related code is in method {{resourceDeficit()}}. 
> For the new preemption code, there are no messages in the RM logs; the 
> related code is in the method {{fairShareStarvation()}}. 
> The negative value isn't only a display issue; it may also cause necessary 
> preemption to be missed. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6176) Test whether FS preemption consider child queues over fair share if the parent is under fair share

2017-02-21 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876352#comment-15876352
 ] 

Yufei Gu commented on YARN-6176:


[~kasha] Yes. 

> Test whether FS preemption consider child queues over fair share if the 
> parent is under fair share
> --
>
> Key: YARN-6176
> URL: https://issues.apache.org/jira/browse/YARN-6176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6176.001.patch
>
>
> Port the test case in YARN-6151 to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6185) Apply SLIDER-1199 to yarn native services for blacklisting nodes

2017-02-21 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876287#comment-15876287
 ] 

Gour Saha commented on YARN-6185:
-

Got it. Committed to yarn-native-services branch.

> Apply SLIDER-1199 to yarn native services for blacklisting nodes
> 
>
> Key: YARN-6185
> URL: https://issues.apache.org/jira/browse/YARN-6185
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6185-yarn-native-services.001.patch, 
> YARN-6185-yarn-native-services.002.patch
>
>
> Enable blacklisting of nodes by Slider AM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6211) Synchronization improvement in move and priority

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876263#comment-15876263
 ] 

Hadoop QA commented on YARN-6211:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 20s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6211 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853766/YARN-6211.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f377d932b8b8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4804050 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15025/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15025/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15025/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Synchronization improvement in move and priority
> 
>
>  

[jira] [Commented] (YARN-6185) Apply SLIDER-1199 to yarn native services for blacklisting nodes

2017-02-21 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876172#comment-15876172
 ] 

Billie Rinaldi commented on YARN-6185:
--

The setRoleHistory method is used in 
[TestRoleHistoryUpdateBlacklist.groovy|https://github.com/apache/incubator-slider/blob/develop/slider-core/src/test/groovy/org/apache/slider/server/appmaster/model/history/TestRoleHistoryUpdateBlacklist.groovy#L50].
 We don't have an equivalent testing system in YARN native services yet, so I 
figured there was no need to add that visible-for-testing method.

> Apply SLIDER-1199 to yarn native services for blacklisting nodes
> 
>
> Key: YARN-6185
> URL: https://issues.apache.org/jira/browse/YARN-6185
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6185-yarn-native-services.001.patch, 
> YARN-6185-yarn-native-services.002.patch
>
>
> Enable blacklisting of nodes by Slider AM.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6211) Synchronization improvement in move and priority

2017-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876168#comment-15876168
 ] 

Sunil G commented on YARN-6211:
---

Nice catch [~bibinchundatt].

Overall the patch looks fine to me. A minor nit: we could avoid the {{appId}} 
variable if needed. 
Pending Jenkins; I will commit the patch tomorrow if there are no other 
objections.



> Synchronization improvement in move and priority
> 
>
> Key: YARN-6211
> URL: https://issues.apache.org/jira/browse/YARN-6211
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6211.001.patch
>
>
> Application appid is wrongly taken for synchronization



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6211) Synchronization improvement in move and priority

2017-02-21 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6211:
---
Attachment: YARN-6211.001.patch

> Synchronization improvement in move and priority
> 
>
> Key: YARN-6211
> URL: https://issues.apache.org/jira/browse/YARN-6211
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6211.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6211) Synchronization improvement in move and priority

2017-02-21 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6211:
---
Description: Application appid is wrongly taken for synchronization

> Synchronization improvement in move and priority
> 
>
> Key: YARN-6211
> URL: https://issues.apache.org/jira/browse/YARN-6211
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6211.001.patch
>
>
> Application appid is wrongly taken for synchronization



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6211) Synchronization improvement in move and priority

2017-02-21 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-6211:
--

 Summary: Synchronization improvement in move and priority
 Key: YARN-6211
 URL: https://issues.apache.org/jira/browse/YARN-6211
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875860#comment-15875860
 ] 

Hadoop QA commented on YARN-6069:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 5s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
33s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
21s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-6069 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853723/YARN-6069-YARN-5355.0004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| 

[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875839#comment-15875839
 ] 

Hadoop QA commented on YARN-6153:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 41s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6153 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853708/YARN-6153.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6e36c27e5a81 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6ba61d2 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15023/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15023/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15023/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> keepContainer 

[jira] [Updated] (YARN-6069) CORS support in timeline v2

2017-02-21 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6069:

Attachment: YARN-6069-YARN-5355.0004.patch

Updating the patch with the documentation change as discussed previously. Also 
added the config to the yarn-default.xml file. 

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch, 
> YARN-6069-YARN-5355.0002.patch, YARN-6069-YARN-5355.0003.patch, 
> YARN-6069-YARN-5355.0004.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence, without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6164) Expose maximum-am-resource-percent in YarnClient

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875775#comment-15875775
 ] 

Hadoop QA commented on YARN-6164:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m  5s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getMaxAMPercentages()
   At AbstractCSQueue.java:== or != in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue.getMaxAMPercentages()
   At AbstractCSQueue.java:[line 458] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
| 

[jira] [Commented] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875764#comment-15875764
 ] 

Hadoop QA commented on YARN-6210:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
24s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.reservation.TestFairSchedulerPlanFollower
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6210 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12853691/YARN-6210.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9dbfd9effa5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6ba61d2 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15022/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15022/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 

[jira] [Commented] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875753#comment-15875753
 ] 

Hadoop QA commented on YARN-6210:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m  5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 29s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m  1s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  4s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6210 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12853691/YARN-6210.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 9ff8da4ca724 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6ba61d2 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/15021/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15021/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 

[jira] [Updated] (YARN-6153) keepContainer does not work when AM retry window is set

2017-02-21 Thread kyungwan nam (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kyungwan nam updated YARN-6153:
---
Attachment: YARN-6153.003.patch

Thanks for the review.
I'm uploading a new patch that addresses your comments.


> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch, 
> YARN-6153.003.patch
>
>
> yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster.
> I submitted a YARN application (slider app) that keepContainers=true, 
> attemptFailuresValidityInterval=30.
> it did work properly when AM was failed firstly.
> all containers launched by previous AM were resynced with new AM (attempt2) 
> without killing containers.
> after 10 minutes, I thought AM failure count was reset by 
> attemptFailuresValidityInterval (5 minutes).
> but, all containers were killed when AM was failed secondly. (new AM attempt3 
> was launched properly)
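
For reference, a minimal sketch of how an application requests these two behaviours at submission time, assuming the standard ApplicationSubmissionContext API; the interval value below is illustrative, not the one from this report:

{code}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;

public final class SubmissionContextExample {
  private SubmissionContextExample() { }

  /** Sketch only: request container retention across AM attempts and a
   *  failure-validity window, as described in the report above. */
  public static void configure(ApplicationSubmissionContext ctx) {
    // Keep containers launched by a previous AM when a new attempt starts.
    ctx.setKeepContainersAcrossApplicationAttempts(true);
    // Failures older than this window no longer count toward
    // yarn.resourcemanager.am.max-attempts (300000 ms = 5 minutes here,
    // an illustrative value).
    ctx.setAttemptFailuresValidityInterval(300000L);
  }
}
{code}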



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6153) keepContainer does not work when AM retry window is set

2017-02-21 Thread kyungwan nam (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875688#comment-15875688
 ] 

kyungwan nam commented on YARN-6153:


{quote}
In RMAppAttemptImpl, why is getStartTime used for checking validityInterval. 
Also, given that shouldCountTowardsMaxAttemptRetry internally already contains 
checking validity interval, this code is not needed ? because it's already done 
in the if (!appAttempt.shouldCountTowardsMaxAttemptRetry()) { before.
{quote}

In BaseFinalTransition.transition, the appAttempt's finishTime would be only a
few milliseconds in the past, because finishTime is set in the preceding state
(FINAL_SAVING). That is why getStartTime is used for checking the validity interval.
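
To make the timing argument concrete, here is a rough, simplified sketch of the validity-interval check being discussed; it is an illustration of the idea, not the actual RMAppAttemptImpl code, and the method name is an assumption:

{code}
public final class ValidityWindowExample {
  private ValidityWindowExample() { }

  /**
   * Simplified illustration: decide whether a failed attempt still falls
   * inside the failure-validity window. If finishTime were used as the
   * reference, it would have been set only milliseconds earlier (in
   * FINAL_SAVING), so every attempt would look "recent"; using the attempt's
   * start time reflects what the window is meant to measure.
   */
  public static boolean withinValidityInterval(long attemptStartTime,
      long validityIntervalMs, long now) {
    return (now - attemptStartTime) <= validityIntervalMs;
  }
}
{code}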

> keepContainer does not work when AM retry window is set
> ---
>
> Key: YARN-6153
> URL: https://issues.apache.org/jira/browse/YARN-6153
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: kyungwan nam
> Attachments: YARN-6153.001.patch, YARN-6153.002.patch
>
>
> yarn.resourcemanager.am.max-attempts has been configured to 2 in my cluster.
> I submitted a YARN application (slider app) that keepContainers=true, 
> attemptFailuresValidityInterval=30.
> it did work properly when AM was failed firstly.
> all containers launched by previous AM were resynced with new AM (attempt2) 
> without killing containers.
> after 10 minutes, I thought AM failure count was reset by 
> attemptFailuresValidityInterval (5 minutes).
> but, all containers were killed when AM was failed secondly. (new AM attempt3 
> was launched properly)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3884) App History status not updated when RMContainer transitions from RESERVED to KILLED

2017-02-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875662#comment-15875662
 ] 

Sunil G commented on YARN-3884:
---

A few points to discuss:
1. The RMContainer is never moved to a final state when the container was in the
RESERVED state and is no longer needed by the scheduler (i.e. the container will
not move to RUNNING, or the reservation did not succeed). From the code, I think
the container state stays stuck in RESERVED in this scenario. It has not been a
visible problem only because the scheduler cleanly removes the container from its
own data structures.
2. Following point 1, ideally we want proper closure for such containers. In
short, the scheduler has to fire an event indicating that a reserved container
will no longer be used, and the RMContainer has to be moved to the corresponding
final state.

Now coming to the patch, I think {{FiCaSchedulerApp.unreserve}} is a better place
to raise the event to the container. With this change, any container event could
reach an RMContainer in the RESERVED state, so there is a potential for invalid
state transitions; at first glance the basic events appear to be handled in
RESERVED, but please double-check in case I missed something.
Could we use the FinishedTransition of RMContainerImpl, which already handles
updating the finish time etc.? The only extra thing it does is send an event to
the RMAppAttempt, which could be skipped when the transition comes from RESERVED.
Would that be better?
Discussed with [~rohithsharma]; please add anything I missed.
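
As a rough, hypothetical sketch of the direction discussed above (this is not the actual patch; the helper and the choice of a KILL event are assumptions based on the discussion):

{code}
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEvent;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEventType;

public final class ReservedContainerClosure {
  private ReservedContainerClosure() { }

  /**
   * Hypothetical helper: after the scheduler has removed a reservation from
   * its own data structures (e.g. from FiCaSchedulerApp#unreserve), push the
   * RMContainer out of RESERVED toward a final state so its history and
   * status get updated instead of staying stuck in RESERVED.
   */
  public static void notifyReservationDropped(RMContainer reservedContainer) {
    reservedContainer.handle(new RMContainerEvent(
        reservedContainer.getContainerId(), RMContainerEventType.KILL));
  }
}
{code}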

> App History status not updated when RMContainer transitions from RESERVED to 
> KILLED
> ---
>
> Key: YARN-3884
> URL: https://issues.apache.org/jira/browse/YARN-3884
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
> Environment: Suse11 Sp3
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy
> Attachments: 0001-YARN-3884.patch, Apphistory Container Status.jpg, 
> Elapsed Time.jpg, Test Result-Container status.jpg, YARN-3884.0002.patch, 
> YARN-3884.0003.patch, YARN-3884.0004.patch, YARN-3884.0005.patch, 
> YARN-3884.0006.patch, YARN-3884.0007.patch, YARN-3884.0008.patch
>
>
> Setup
> ===
> 1 NM 3072 16 cores each
> Steps to reproduce
> ===
> 1.Submit apps  to Queue 1 with 512 mb 1 core
> 2.Submit apps  to Queue 2 with 512 mb and 5 core
> lots of containers get reserved and unreserved in this case 
> {code}
> 2015-07-02 20:45:31,169 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0002_01_13 Container Transitioned from NEW to 
> RESERVED
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> Reserved container  application=application_1435849994778_0002 
> resource= queue=QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=1.6410257, absoluteUsedCapacity=0.65625, numApps=1, 
> numContainers=5 usedCapacity=1.6410257 absoluteUsedCapacity=0.65625 
> used= cluster=
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> Re-sorting assigned queue: root.QueueA stats: QueueA: capacity=0.4, 
> absoluteCapacity=0.4, usedResources=, 
> usedCapacity=2.0317461, absoluteUsedCapacity=0.8125, numApps=1, 
> numContainers=6
> 2015-07-02 20:45:31,170 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: 
> assignedContainer queue=root usedCapacity=0.96875 
> absoluteUsedCapacity=0.96875 used= 
> cluster=
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e24_1435849994778_0001_01_14 Container Transitioned from NEW to 
> ALLOCATED
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=dsperf   
> OPERATION=AM Allocated ContainerTARGET=SchedulerApp 
> RESULT=SUCCESS  APPID=application_1435849994778_0001
> CONTAINERID=container_e24_1435849994778_0001_01_14
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: 
> Assigned container container_e24_1435849994778_0001_01_14 of capacity 
>  on host host-10-19-92-117:64318, which has 6 
> containers,  used and  available 
> after allocation
> 2015-07-02 20:45:31,191 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: 
> assignedContainer application 

[jira] [Commented] (YARN-6176) Test whether FS preemption consider child queues over fair share if the parent is under fair share

2017-02-21 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875648#comment-15875648
 ] 

Karthik Kambatla commented on YARN-6176:


I am adding this test in YARN-6210. [~yufeigu] - are you okay with closing this 
as a duplicate of that JIRA? 

> Test whether FS preemption consider child queues over fair share if the 
> parent is under fair share
> --
>
> Key: YARN-6176
> URL: https://issues.apache.org/jira/browse/YARN-6176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6176.001.patch
>
>
> Port the test case in YARN-6151 to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6175) Negative vcore for resource needed to preempt

2017-02-21 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875647#comment-15875647
 ] 

Karthik Kambatla commented on YARN-6175:


In FairScheduler.java, do we still need the Resources.max call? 
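
For context, a minimal sketch of the kind of componentwise clamping being discussed, assuming the org.apache.hadoop.yarn.util.resource.Resources utility; this illustrates why a max-with-zero matters for the deficit calculation and is not the patch itself:

{code}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public final class DeficitExample {
  private DeficitExample() { }

  /** Resource still needed to reach fairShare, clamped so that a surplus in
   *  one dimension (e.g. vcores) cannot show up as a negative deficit. */
  public static Resource deficit(Resource fairShare, Resource usage) {
    // May go negative in a component if usage exceeds fair share there.
    Resource raw = Resources.subtract(fairShare, usage);
    // Clamp each component at zero.
    return Resources.componentwiseMax(raw, Resources.none());
  }
}
{code}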

> Negative vcore for resource needed to preempt
> -
>
> Key: YARN-6175
> URL: https://issues.apache.org/jira/browse/YARN-6175
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6175.001.patch, YARN-6175.branch-2.8.002.patch
>
>
> Both old preemption code (2.8 and before) and new preemption code could have 
> negative vcores while calculating resources needed to preempt.
> For old preemption, you can find following messages in RM logs:
> {code}
> Should preempt  
> {code}
> The related code is in method {{resourceDeficit()}}. 
> For new preemption code, there are no messages in RM logs, the related code 
> is in method {{fairShareStarvation()}}. 
> The negative value isn't only a display issue, but also may cause missing 
> necessary preemption. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-21 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875636#comment-15875636
 ] 

Varun Saxena commented on YARN-6069:


[~rohithsharma], yeah, the documentation looks fine to me.
TestYarnConfigurationFields explicitly ignores timeline service properties;
that's why it's not failing.

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch, 
> YARN-6069-YARN-5355.0002.patch, YARN-6069-YARN-5355.0003.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875620#comment-15875620
 ] 

Karthik Kambatla commented on YARN-6210:


Patch (v2) fixes:
# The javadoc warnings
# The TestFairScheduler tests around reservations. It was very hard to understand
the intent of the failing tests, so I updated them based on my understanding.

> FS: Node reservations can interfere with preemption
> ---
>
> Key: YARN-6210
> URL: https://issues.apache.org/jira/browse/YARN-6210
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6210.1.patch, YARN-6210.2.patch
>
>
> Today, on a saturated cluster, apps with pending demand reserve nodes. A new 
> app might not be able to preempt resources because these nodes are already 
> reserved. This can be reproduced by the example in YARN-6151. 
> Since node reservations are to prevent starvation of apps requesting large 
> containers, triggering these reservations only on starved applications would 
> avoid this situation. 
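
As a rough, hypothetical sketch of the idea in the description above (all types and method names here, such as isStarved, are stand-ins for illustration, not the actual patch):

{code}
public final class StarvationGatedReservation {

  interface App {                 // stand-in for an FS application attempt
    boolean isStarved();          // hypothetical starvation predicate
  }

  interface Node {                // stand-in for a scheduler node
    boolean reserve(App app);     // hypothetical reservation call
  }

  private StarvationGatedReservation() { }

  static boolean maybeReserve(App app, Node node) {
    // Reservations protect apps that need large containers from starving;
    // if this app is not starved, do not pin the node with a reservation
    // that could block preemption for other apps.
    if (!app.isStarved()) {
      return false;
    }
    return node.reserve(app);
  }
}
{code}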



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-6210:
---
Attachment: YARN-6210.2.patch

> FS: Node reservations can interfere with preemption
> ---
>
> Key: YARN-6210
> URL: https://issues.apache.org/jira/browse/YARN-6210
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6210.1.patch, YARN-6210.2.patch
>
>
> Today, on a saturated cluster, apps with pending demand reserve nodes. A new 
> app might not be able to preempt resources because these nodes are already 
> reserved. This can be reproduced by the example in YARN-6151. 
> Since node reservations are to prevent starvation of apps requesting large 
> containers, triggering these reservations only on starved applications would 
> avoid this situation. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-6210:
---
Attachment: (was: YARN-6210.2.patch)

> FS: Node reservations can interfere with preemption
> ---
>
> Key: YARN-6210
> URL: https://issues.apache.org/jira/browse/YARN-6210
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6210.1.patch, YARN-6210.2.patch
>
>
> Today, on a saturated cluster, apps with pending demand reserve nodes. A new 
> app might not be able to preempt resources because these nodes are already 
> reserved. This can be reproduced by the example in YARN-6151. 
> Since node reservations are to prevent starvation of apps requesting large 
> containers, triggering these reservations only on starved applications would 
> avoid this situation. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6210) FS: Node reservations can interfere with preemption

2017-02-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-6210:
---
Attachment: YARN-6210.2.patch

> FS: Node reservations can interfere with preemption
> ---
>
> Key: YARN-6210
> URL: https://issues.apache.org/jira/browse/YARN-6210
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: YARN-6210.1.patch, YARN-6210.2.patch
>
>
> Today, on a saturated cluster, apps with pending demand reserve nodes. A new 
> app might not be able to preempt resources because these nodes are already 
> reserved. This can be reproduced by the example in YARN-6151. 
> Since node reservations are to prevent starvation of apps requesting large 
> containers, triggering these reservations only on starved applications would 
> avoid this situation. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6164) Expose maximum-am-resource-percent in YarnClient

2017-02-21 Thread Benson Qiu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benson Qiu updated YARN-6164:
-
Attachment: YARN-6164.005.patch

005: Fix org.apache.hadoop.yarn.api.TestPBImplRecords.testQueueInfoPBImpl

The other failing tests from version 004 pass on my local machine.

> Expose maximum-am-resource-percent in YarnClient
> 
>
> Key: YARN-6164
> URL: https://issues.apache.org/jira/browse/YARN-6164
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Benson Qiu
>Assignee: Benson Qiu
> Attachments: YARN-6164.001.patch, YARN-6164.002.patch, 
> YARN-6164.003.patch, YARN-6164.004.patch, YARN-6164.005.patch
>
>
> `yarn.scheduler.capacity.maximum-am-resource-percent` is exposed through the 
> [Cluster Scheduler 
> API|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Scheduler_API],
>  but not through 
> [YarnClient|https://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/client/api/YarnClient.html].
> Since YarnClient and RM REST APIs depend on different ports (8032 vs 8088 by 
> default), it would be nice to expose `maximum-am-resource-percent` in 
> YarnClient as well. 
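
Until it is available in YarnClient, a rough sketch of reading the value through the documented Cluster Scheduler REST endpoint; the host is illustrative, 8088 is the default RM web port mentioned above, and the exact JSON layout depends on the configured scheduler:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public final class SchedulerInfoFetch {
  private SchedulerInfoFetch() { }

  /** Fetch the Cluster Scheduler API response as a JSON string; the
   *  maximum-am-resource-percent setting appears in the scheduler/queue
   *  JSON for the CapacityScheduler, to be parsed as needed. */
  public static String fetchSchedulerInfo(String rmWebHost) throws Exception {
    URL url = new URL("http://" + rmWebHost + ":8088/ws/v1/cluster/scheduler");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      StringBuilder body = new StringBuilder();
      String line;
      while ((line = in.readLine()) != null) {
        body.append(line);
      }
      return body.toString();
    } finally {
      conn.disconnect();
    }
  }
}
{code}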



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6069) CORS support in timeline v2

2017-02-21 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875559#comment-15875559
 ] 

Rohith Sharma K S commented on YARN-6069:
-

bq. Also yarn.timeline-service.http-cross-origin.enabled is not present in 
yarn-default.xml from before. Should it be there?
There are tests that validate YarnConfiguration against yarn-default.xml, so it
is surprising that the test is not failing. We can add this configuration to
yarn-default.xml.

For documentation, the text below will be added. Are you fine with it?
{code}
To enable cross-origin support (CORS) for the Timeline Service v.2, please set 
the following configuration parameters:

In core-site.xml, add 
org.apache.hadoop.security.HttpCrossOriginFilterInitializer to 
hadoop.http.filter.initializers.   
In yarn-site.xml, set yarn.timeline-service.http-cross-origin.enabled to true. 
For more configurations used for cross-origin support, refer to 
[HttpAuthentication](../../hadoop-project-dist/hadoop-common/HttpAuthentication.html#CORS).
 Please note that yarn.timeline-service.http-cross-origin.enabled, if set to 
true, overrides hadoop.http.cross-origin.enabled.
{code}
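
A minimal sketch of the same two settings applied programmatically, assuming the standard Hadoop Configuration API; in practice they belong in core-site.xml and yarn-site.xml as the text above describes:

{code}
import org.apache.hadoop.conf.Configuration;

public final class TimelineCorsConfigExample {
  private TimelineCorsConfigExample() { }

  public static Configuration corsEnabled() {
    Configuration conf = new Configuration();
    // core-site.xml: register the cross-origin filter initializer
    // (in a real deployment, append to the existing initializer list).
    conf.set("hadoop.http.filter.initializers",
        "org.apache.hadoop.security.HttpCrossOriginFilterInitializer");
    // yarn-site.xml: enable CORS for the timeline service; per the note
    // above, this overrides hadoop.http.cross-origin.enabled.
    conf.setBoolean("yarn.timeline-service.http-cross-origin.enabled", true);
    return conf;
  }
}
{code}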

> CORS support in timeline v2
> ---
>
> Key: YARN-6069
> URL: https://issues.apache.org/jira/browse/YARN-6069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Sreenath Somarajapuram
>Assignee: Rohith Sharma K S
> Attachments: YARN-6069-YARN-5355.0001.patch, 
> YARN-6069-YARN-5355.0002.patch, YARN-6069-YARN-5355.0003.patch
>
>
> By default the browser prevents accessing resources from multiple domains. In 
> most cases the UIs would be loaded from a domain different from that of the 
> timeline server. Hence without CORS support, it would be difficult for the 
> UIs to load data from timeline v2.
> YARN-2277 must provide more info on the implementation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org