[jira] [Commented] (YARN-5136) Error in handling event type APP_ATTEMPT_REMOVED to the scheduler

2016-09-29 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15531990#comment-15531990
 ] 

Wilfred Spiegelenburg commented on YARN-5136:
-

Hi [~tangshangwen], do you mind if I assign this to myself? I have just run into 
the same issue and would like to provide a fix for it.

> Error in handling event type APP_ATTEMPT_REMOVED to the scheduler
> -
>
> Key: YARN-5136
> URL: https://issues.apache.org/jira/browse/YARN-5136
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: tangshangwen
>Assignee: tangshangwen
>
> Moving an app causes the RM to exit.
> {noformat}
> 2016-05-24 23:20:47,202 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.IllegalStateException: Given app to remove 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt@ea94c3b
>  does not exist in queue [root.bdp_xx.bdp_mart_xx_formal, 
> demand=<memory:..., vCores:...>, running=<memory:..., vCores:13422>, 
> share=<memory:..., vCores:...>, w=<weight=1.0>]
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.removeApp(FSLeafQueue.java:119)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.removeApplicationAttempt(FairScheduler.java:779)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1231)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:114)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:680)
> at java.lang.Thread.run(Thread.java:745)
> 2016-05-24 23:20:47,202 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: 
> container_e04_1464073905025_15410_01_001759 Container Transitioned from 
> ACQUIRED to RELEASED
> 2016-05-24 23:20:47,202 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Exiting, bbye..
> {noformat}
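
For context, the top frame of the trace is FSLeafQueue.removeApp. A rough
paraphrase of the failing check (not the exact 2.7.1 source): the queue throws
when asked to remove an attempt it no longer tracks, and the scheduler event
dispatcher treats any exception from the scheduler as fatal. The "moving an
app" trigger suggests the attempt had already left this queue before the
APP_ATTEMPT_REMOVED event was handled.

{code}
// Rough paraphrase of FSLeafQueue.removeApp (not the exact 2.7.1 source):
// removing an attempt the queue no longer tracks throws, and the RM's
// scheduler event dispatcher turns that into the FATAL exit above.
public void removeApp(FSAppAttempt app) {
  if (!runnableApps.remove(app) && !nonRunnableApps.remove(app)) {
    throw new IllegalStateException(
        "Given app to remove " + app + " does not exist in queue " + this);
  }
}
{code}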






[jira] [Commented] (YARN-5136) Error in handling event type APP_ATTEMPT_REMOVED to the scheduler

2016-09-29 Thread tangshangwen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532199#comment-15532199
 ] 

tangshangwen commented on YARN-5136:


[~wilfreds] OK.




[jira] [Updated] (YARN-5136) Error in handling event type APP_ATTEMPT_REMOVED to the scheduler

2016-09-29 Thread tangshangwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tangshangwen updated YARN-5136:
---
Assignee: Wilfred Spiegelenburg  (was: tangshangwen)




[jira] [Assigned] (YARN-2093) Fair Scheduler IllegalStateException after upgrade from 2.2.0 to 2.4.1-SNAP

2016-09-29 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg reassigned YARN-2093:
---

Assignee: Wilfred Spiegelenburg

> Fair Scheduler IllegalStateException after upgrade from 2.2.0 to 2.4.1-SNAP
> ---
>
> Key: YARN-2093
> URL: https://issues.apache.org/jira/browse/YARN-2093
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Jon Bringhurst
>Assignee: Wilfred Spiegelenburg
>
> After upgrading from 2.2.0 to 2.4.1-SNAP, I ran into the following on startup:
> {noformat}
> 21:19:34,308  INFO RMAppAttemptImpl:659 - 
> appattempt_1400092144371_0003_09 State change from SUBMITTED to SCHEDULED
> 21:19:34,309  INFO RMAppAttemptImpl:659 - 
> appattempt_1400092144371_0004_08 State change from SUBMITTED to SCHEDULED
> 21:19:34,310  INFO RMAppAttemptImpl:659 - 
> appattempt_1400092144371_0003_10 State change from SUBMITTED to SCHEDULED
> 21:19:34,310  INFO RMAppAttemptImpl:659 - 
> appattempt_1400092144371_0003_11 State change from SUBMITTED to SCHEDULED
> 21:19:34,317  INFO FairScheduler:673 - Added Application Attempt 
> appattempt_1400092144371_0004_09 to scheduler from user: 
> samza-perf-playground
> 21:19:34,318  INFO FairScheduler:673 - Added Application Attempt 
> appattempt_1400092144371_0004_10 to scheduler from user: 
> samza-perf-playground
> 21:19:34,318  INFO RMAppAttemptImpl:659 - 
> appattempt_1400092144371_0004_09 State change from SUBMITTED to SCHEDULED
> 21:19:34,318  INFO FairScheduler:733 - Application 
> appattempt_1400092144371_0003_05 is done. finalState=FAILED
> 21:19:34,319  INFO RMAppAttemptImpl:659 - 
> appattempt_1400092144371_0004_10 State change from SUBMITTED to SCHEDULED
> 21:19:34,319  INFO AppSchedulingInfo:108 - Application 
> application_1400092144371_0003 requests cleared
> 21:19:34,319  INFO FairScheduler:673 - Added Application Attempt 
> appattempt_1400092144371_0004_11 to scheduler from user: 
> samza-perf-playground
> 21:19:34,320  INFO FairScheduler:733 - Application 
> appattempt_1400092144371_0003_06 is done. finalState=FAILED
> 21:19:34,320  INFO AppSchedulingInfo:108 - Application 
> application_1400092144371_0003 requests cleared
> 21:19:34,320  INFO RMAppAttemptImpl:659 - 
> appattempt_1400092144371_0004_11 State change from SUBMITTED to SCHEDULED
> 21:19:34,323 FATAL ResourceManager:600 - Error in handling event type 
> APP_ATTEMPT_REMOVED to the scheduler
> java.lang.IllegalStateException: Given app to remove 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerApp@429f809d
>  does not exist in queue [root.samza-perf-playground, demand=<memory:..., 
> vCores:0>, running=<memory:..., vCores:...>, share=<memory:..., vCores:...>, 
> w=<weight=...>]
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSLeafQueue.removeApp(FSLeafQueue.java:93)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.removeApplicationAttempt(FairScheduler.java:774)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1201)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:122)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:591)
>   at java.lang.Thread.run(Thread.java:744)
> 21:19:34,330  INFO ResourceManager:604 - Exiting, bbye..
> 21:19:34,335  INFO log:67 - Stopped SelectChannelConnector@:8088
> 21:19:34,437  INFO Server:2398 - Stopping server on 8033
> 21:19:34,438  INFO Server:694 - Stopping IPC Server listener on 8033
> {noformat}
> Last commit message for this build is (branch-2.4 on 
> github.com/apache/hadoop-common):
> {noformat}
> commit 09e24d5519187c0db67aacc1992be5d43829aa1e
> Author: Arpit Agarwal 
> Date:   Tue May 20 20:18:46 2014 +
> HADOOP-10562. Fix CHANGES.txt entry again
> 
> git-svn-id: 
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2.4@1596389 
> 13f79535-47bb-0310-9956-ffa450edef68
> {noformat}






[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532511#comment-15532511
 ] 

Rohith Sharma K S commented on YARN-5585:
-

I still have concerns about introducing a new field, entityPrefixId, rather 
than using createdTime in the row key.
# In a distributed cluster, we can expect the same entity types to originate 
from different JVMs. For example, in MR, what if the YarnChild processes want 
to publish their entities with a taskId? How would each YarnChild know the 
entityPrefixId? The only cluster-wide unique value would be a timestamp.
# If entityPrefixId is a string, the user is expected to provide zero-padded 
values (e.g. 0001, 0002). It would be a very tedious task for the user to 
decide how many padding zeros to use; for a long-running service, one can 
never predict how many entities will be generated (see the sketch below).

If we look at the problem, this issue comes from the storage layer. To solve 
it, I do not feel we need to push the solution into the API layer, i.e. change 
the TimelineEntity object. To unblock this at the API layer, wouldn't it be 
better to go with the API change, i.e. the current patch attached? The storage 
layer issue could then be discussed further in another JIRA.

In the future, if any other storage is plugged in, the entity prefix would 
become stale.
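
A small, self-contained illustration of point 2 (plain Java, not ATSv2 code):
byte-ordered stores sort row keys lexicographically, so unpadded numeric
strings come back in the wrong order unless the writer guesses a wide enough
padding up front.

{code}
// Plain-Java illustration (not ATSv2 code) of the zero-padding problem:
// lexicographic ordering, as used for row keys, mis-sorts unpadded numbers.
import java.util.Arrays;

public class PrefixPaddingDemo {
  public static void main(String[] args) {
    String[] unpadded = {"1", "2", "10"};
    String[] padded = {"0001", "0002", "0010"};
    Arrays.sort(unpadded); // -> [1, 10, 2]: wrong numeric order
    Arrays.sort(padded);   // -> [0001, 0002, 0010]: correct, but only
                           // because a wide enough pad was chosen up front
    System.out.println(Arrays.toString(unpadded));
    System.out.println(Arrays.toString(padded));
  }
}
{code}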

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved starting after the given fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, then a REST call gives the first/last 100 entities. How do we 
> retrieve the next set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications app-1, app-2, ..., app-10 are stored in the 
> database, *getApps?limit=5* gives app-1 to app-5, but there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in the web UI.
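
To make the proposed usage concrete, here is a client-side paging sketch. The
endpoint shape follows the example in the description; the URL and helpers are
hypothetical, nothing here is final API.

{code}
// Paging sketch for the proposed fromId filter (hypothetical client code).
import java.util.Collections;
import java.util.List;

public class FromIdPagingSketch {
  void fetchAll() {
    String base = "http://reader:8188/ws/v2/timeline/apps?limit=5";
    String fromId = null;
    while (true) {
      String url = (fromId == null) ? base : base + "&fromId=" + fromId;
      List<String> page = fetchAppIds(url); // hypothetical: GET + parse IDs
      if (page.isEmpty()) {
        break;
      }
      process(page);                        // hypothetical downstream work
      fromId = page.get(page.size() - 1);   // last ID seeds the next page
    }
  }

  List<String> fetchAppIds(String url) { return Collections.emptyList(); }
  void process(List<String> page) { }
}
{code}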






[jira] [Commented] (YARN-2093) Fair Scheduler IllegalStateException after upgrade from 2.2.0 to 2.4.1-SNAP

2016-09-29 Thread Wilfred Spiegelenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532508#comment-15532508
 ] 

Wilfred Spiegelenburg commented on YARN-2093:
-

This looks like a duplicate of YARN-5136.
I will provide a fix through that newer JIRA.




[jira] [Resolved] (YARN-2093) Fair Scheduler IllegalStateException after upgrade from 2.2.0 to 2.4.1-SNAP

2016-09-29 Thread Wilfred Spiegelenburg (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg resolved YARN-2093.
-
Resolution: Duplicate




[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-29 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532592#comment-15532592
 ] 

Varun Saxena commented on YARN-5585:


bq. In a distributed cluster, we can expect the same entity types to originate 
from different JVMs. For example, in MR, what if the YarnChild processes want 
to publish their entities with a taskId? How would each YarnChild know the 
entityPrefixId? The only cluster-wide unique value would be a timestamp.
Frankly, by design, application-level entities will be published by the AM. 
Only it has access to the collector address, and in a secure setup only it 
will have access to the token needed to publish to collectors; we do not 
forward this info to containers. The AM can, however, forward this information 
to other processes, which can then potentially publish entities, but if 
specific AMs can do that, they can just as easily push the prefix as well. 
Task-level entities and their children will be different, though, and will 
frankly have their own unique prefix.

bq. If entityPrefixId is a string
We were thinking of it as a long. The intention of the prefix is to provide a 
sort order, and numbers can easily achieve that. We haven't reached a 
conclusion on this though; it needs further discussion.

bq. If we look at the problem, this issue comes from the storage layer.
Frankly, we cannot necessarily say ordering is a storage issue, as no storage 
would naturally provide a created-time sort order; even insertion order is not 
guaranteed. We had to do some plumbing even for LevelDB, and this would be 
even more difficult for HDFS storage. For the timeline service as a whole 
(irrespective of storage), technically it should be fine as long as it 
provides a way to retrieve the entities you want.
I understand, though, that retrieving entities in created-time sort order is 
the most common use case. That is why I too was initially of the opinion that 
we should have inherent support for created-time ordering. We can go with an 
index table for created time as suggested earlier, but this would incur a 
read-side penalty. Or we can have created time as part of the entity table row 
key, but this would mean a write-side penalty too, because you would not know 
the created time of the entity supplied; we could, however, force the user to 
send the created time in every entity.
As you were not at the last meeting, your point of view was missing. We can 
revisit this in today's meeting.
The only way this can be solved at the timeline service layer without an API 
change is to have another table to assist in retrieval, but that would then 
incur read/write penalties. Could we do something in a coprocessor, i.e. in 
prePut or preScan, to support the created-time use case? I am not really aware 
of the cost that would incur, so it will have to be discussed.

bq. In the future, if any other storage is plugged in, the entity prefix would 
become stale.
Maybe, or maybe not. Other storages could potentially use it for indexing as 
well.
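
For reference, the usual plumbing for a newest-first scan over a byte-ordered
store is an inverted timestamp in the row key; a minimal sketch (illustrative
only, not the actual ATSv2 key layout):

{code}
// Illustrative only, not the actual ATSv2 row-key layout: writing
// Long.MAX_VALUE - createdTime means a plain ascending scan returns the
// newest entities first. This is the kind of plumbing referred to above.
import java.nio.ByteBuffer;

public class InvertedTimestampSketch {
  public static void main(String[] args) {
    long createdTime = System.currentTimeMillis();
    long inverted = Long.MAX_VALUE - createdTime;
    byte[] keyPart = ByteBuffer.allocate(Long.BYTES)
        .putLong(inverted)
        .array(); // larger timestamps -> smaller keys -> scanned first
    System.out.println(keyPart.length + " bytes, inverted=" + inverted);
  }
}
{code}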




[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-29 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532764#comment-15532764
 ] 

Sangjin Lee commented on YARN-5585:
---

Thanks [~rohithsharma] for your comments and input!

I'd like to structure the proposal in a way that hopefully answers some of your 
questions and moves this forward.

To me one of the key goals here is to keep writes lean. In other words, we 
would like to avoid write amplifications (no more auxiliary tables or double 
writes). Then it follows that the client would need to provide this entity 
prefix not only when the entity is written for the first time but also *on all 
subsequent updates*.

Providing this entity prefix on all writes and updates may not be practical or 
desired for all cases. I can certainly see that this is not practical for 
YARN-generic entities (e.g. containers). So IMO the *optionality* is a must 
here. If you don't want to have a different sort order than the entity id 
order, you shouldn't be forced to do it.

In terms of what the entity prefix should be if you need it, a strong argument 
can be made for using created time for everyone. However, again, providing the 
created timestamp for all subsequent writes may not be practical. That would 
mean that the AM would need to keep track of the created time for all their 
entities at all times. Perhaps that is trivial for certain AMs, and not for 
others. It's all the more reason to come up with a simple prefix scheme that 
can be easily provided in many situations. For example, if there is a number 
that can be easily computed for your entity, that would be a perfect candidate 
for the entity prefix.

For Tez, if we introduce the entity prefix and you use the created time for 
it, it would look exactly the same from the Tez perspective. Whether we have a 
more flexible entity prefix or an explicit created time (both would be in the 
row key), it would work the same. The client code would do either
{code}
entity.setEntityPrefix(createdTime);
client.writeEntity(entity); // pseudo-code
{code}
or
{code}
entity.setCreatedTime(createdTime);
client.writeEntity(entity); // pseudo-code
{code}
The rest of the server code or how data is written, fetched and sorted would 
work in the same manner.

Unfortunately I won't be able to attend today's call as I am away at a 
conference. Hopefully this helps move the discussion forward.




[jira] [Assigned] (YARN-5680) Add 2 new fields in Slider status output - image-name and is-privileged-container

2016-09-29 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi reassigned YARN-5680:


Assignee: Billie Rinaldi

> Add 2 new fields in Slider status output - image-name and 
> is-privileged-container
> -
>
> Key: YARN-5680
> URL: https://issues.apache.org/jira/browse/YARN-5680
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
>
> We need to add 2 new fields to the Slider status output for the docker 
> provider - image-name and is-privileged-container. The native services REST 
> API needs to expose these 2 attribute values to end-users.






[jira] [Created] (YARN-5689) Convert native services REST API to use agentless docker provider

2016-09-29 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-5689:


 Summary: Convert native services REST API to use agentless docker 
provider
 Key: YARN-5689
 URL: https://issues.apache.org/jira/browse/YARN-5689
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


The initial version of the native services REST API uses the agent provider. It 
should be converted to use the new docker provider instead.






[jira] [Created] (YARN-5690) Integrate native services modules into maven build

2016-09-29 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-5690:


 Summary: Integrate native services modules into maven build
 Key: YARN-5690
 URL: https://issues.apache.org/jira/browse/YARN-5690
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


The yarn dist assembly should include jars for the new modules as well as their 
new dependencies. We may want to create new lib directories in the tarball for 
the dependencies of the slider-core and services API modules, to avoid adding 
these dependencies into the general YARN classpath.






[jira] [Created] (YARN-5691) RM failed: Failed to load/recover state due to bad DelegationKey in RM State Store

2016-09-29 Thread Aleksandr Balitsky (JIRA)
Aleksandr Balitsky created YARN-5691:


 Summary: RM failed: Failed to load/recover state due to bad 
DelegationKey in RM State Store
 Key: YARN-5691
 URL: https://issues.apache.org/jira/browse/YARN-5691
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.7.3, 2.7.2, 2.7.1, 2.7.0
Reporter: Aleksandr Balitsky
Priority: Minor


The RM failed during recovery with the following error:

2016-09-12 21:32:21,999 ERROR 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to 
load/recover state
java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at 
org.apache.hadoop.security.token.delegation.DelegationKey.readFields(DelegationKey.java:110)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:346)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1044)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1084)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1221)
2016-09-12 21:32:22,002 INFO org.apache.hadoop.service.AbstractService: Service 
RMActiveServices failed in state STARTED; cause: java.io.EOFException
java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at 
org.apache.hadoop.security.token.delegation.DelegationKey.readFields(DelegationKey.java:110)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:346)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1044)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1084)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1221)
2016-09-12 21:32:22,008 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
Stopping ResourceManager metrics system...
2016-09-12 21:32:22,009 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
ResourceManager metrics system stopped.
2016-09-12 21:32:22,009 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: 
ResourceManager metrics system shutdown complete.
2016-09-12 21:32:22,010 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: 
AsyncDispatcher is draining
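
The failing frame is DelegationKey.readFields, which suggests the state store
handed it a truncated or empty key record. A minimal sketch of that read path
(state-store plumbing elided; how recovery should react to a bad key is what
the fix needs to decide):

{code}
// Minimal sketch of the failing read (state-store plumbing elided).
// DelegationKey.readFields consumes the serialized record, so a truncated
// or empty key file surfaces as the EOFException in the log above.
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import org.apache.hadoop.security.token.delegation.DelegationKey;

public final class DelegationKeyReadSketch {
  static DelegationKey readKey(byte[] data) throws IOException {
    DelegationKey key = new DelegationKey();
    try (DataInputStream in =
        new DataInputStream(new ByteArrayInputStream(data))) {
      key.readFields(in); // throws EOFException on a truncated record
    }
    return key;
  }
}
{code}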

[jira] [Updated] (YARN-5691) RM failed: Failed to load/recover state due to bad DelegationKey in RM State Store

2016-09-29 Thread Aleksandr Balitsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Balitsky updated YARN-5691:
-
Attachment: YARN_5691_v1_001_patch.patch


[jira] [Commented] (YARN-5677) RM can be in active-active state for an extended period

2016-09-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532872#comment-15532872
 ] 

Jian He commented on YARN-5677:
---

bq. Just tested the leader election (with the right property enabled), and it 
works as advertised.
Sorry, I didn't get it. Could you clarify which config you set, and which 
leader election class you are testing?

> RM can be in active-active state for an extended period
> ---
>
> Key: YARN-5677
> URL: https://issues.apache.org/jira/browse/YARN-5677
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-5677.001.patch
>
>
> Both branch-2.8/trunk and branch-2.7 have issues when the active RM loses 
> contact with the ZK node(s).
> In branch-2.7, the RM will retry the connection 1000 times by default.  
> Attempting to contact a node which cannot be reached is slow, which means the 
> active can take over an hour to realize it is no longer active.  I clocked it 
> at about an hour and a half in my tests.  The solution appears to be to add 
> some time awareness into the retry loop.
> In branch-2.8/trunk, there is no maximum number of retries that I see.  It 
> appears the connection will be retried forever, with the active never 
> figuring out it's no longer active.  In my testing, the active-active state 
> lasted almost 2 hours with no sign of stopping before I killed it.  The 
> solution appears to be to cap the number of retries or amount of time spent 
> retrying.
> This issue is significant because of the asynchronous nature of job 
> submission.  If the active doesn't know it's not active, it will buffer up 
> job submissions until it finally realizes it has become the standby. Then it 
> will fail all the job submissions in bulk. In high-volume workflows, that 
> behavior can create huge mass job failures.
> This issue is also important because the node managers will not fail over to 
> the new active until the old active realizes it's the standby.  Workloads 
> submitted after the old active loses contact with ZK will therefore fail to 
> be executed regardless of which RM the clients contact.
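
A deadline-capped retry loop is one shape the suggested "time awareness" could
take (illustrative sketch only, not the attached patch; every name below is a
placeholder rather than a real RM config key or method):

{code}
// Illustrative sketch (not the attached patch): bound the ZK reconnect
// loop by wall-clock time as well as retry count, so the active RM
// notices a lost session in seconds rather than an hour-plus.
import java.io.IOException;

public class DeadlineRetrySketch {
  private final int maxRetries = 1000;          // placeholder budget
  private final long retryTimeBudgetMs = 10000; // placeholder budget

  void reconnectOrGiveUp() {
    boolean connected = false;
    long deadline = System.currentTimeMillis() + retryTimeBudgetMs;
    for (int retry = 0; retry < maxRetries
        && System.currentTimeMillis() < deadline; retry++) {
      try {
        connectToZk();
        connected = true;
        break;
      } catch (IOException e) {
        // swallow and retry until one of the budgets runs out
      }
    }
    if (!connected) {
      transitionToStandby(); // stop acting as the active RM
    }
  }

  void connectToZk() throws IOException { /* placeholder */ }
  void transitionToStandby() { /* placeholder */ }
}
{code}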






[jira] [Commented] (YARN-4061) [Fault tolerance] Fault tolerant writer for timeline v2

2016-09-29 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532914#comment-15532914
 ] 

Joep Rottinghuis commented on YARN-4061:


A potential additional challenge: to serialize mutations (in order to spool 
them to a file) we would likely use MutationSerialization, which lives in the 
org.apache.hadoop.hbase.mapreduce package and thus contributes to circular 
dependencies (ATS -> HBase -> MapReduce -> YARN). cc [~sjlee0]

> [Fault tolerance] Fault tolerant writer for timeline v2
> ---
>
> Key: YARN-4061
> URL: https://issues.apache.org/jira/browse/YARN-4061
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: FaulttolerantwriterforTimelinev2.pdf
>
>
> We need to build a timeline writer that can be resistant to backend storage 
> down time and timeline collector failures. 






[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532917#comment-15532917
 ] 

Hudson commented on YARN-4205:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10512 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10512/])
YARN-4205. Add a service for monitoring application life time out. (jianhe: rev 
2ae5a3a5bf5ea355370469a53eeccff0b5220081)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/AbstractLivelinessMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/monitor/package-info.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestApplicationLifetimeMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationTimeoutType.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/monitor/RMAppToMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/monitor/RMAppLifetimeMonitor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java


> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor applications for which a lifetime is configured, 
> and if an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.
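
As a sketch of the shape this service takes (the commit adds
RMAppLifetimeMonitor on top of the existing AbstractLivelinessMonitor; the
class below is illustrative, not the committed code):

{code}
// Illustrative sketch, not the committed RMAppLifetimeMonitor:
// AbstractLivelinessMonitor checks registered objects on a configurable
// interval and calls expire() once one exceeds its timeout.
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.util.AbstractLivelinessMonitor;
import org.apache.hadoop.yarn.util.Clock;

public class AppLifetimeMonitorSketch
    extends AbstractLivelinessMonitor<ApplicationId> {

  public AppLifetimeMonitorSketch(Clock clock) {
    super("AppLifetimeMonitorSketch", clock);
  }

  @Override
  protected void expire(ApplicationId appId) {
    // the real monitor kills an application that has run past its
    // configured lifetime, measured from submit time
  }
}
{code}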




[jira] [Commented] (YARN-5677) RM can be in active-active state for an extended period

2016-09-29 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15532916#comment-15532916
 ] 

Daniel Templeton commented on YARN-5677:


curator=false, embedded=false => completely broken
curator=false, embedded=true => allows indefinite active-active state
curator=true, embedded=* => works correctly




[jira] [Created] (YARN-5692) Support more application timeout types

2016-09-29 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-5692:
---

 Summary: Support more application timeout types
 Key: YARN-5692
 URL: https://issues.apache.org/jira/browse/YARN-5692
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S


YARN-4205 adds a general framework to monitor an application for timeouts, and 
it supports monitoring an application for its lifetime, i.e. monitoring that 
starts from the submit time.

Some users might need to monitor applications in different phases: say, some 
users want monitoring to start from launch, others from registration, and so 
on. Specific use cases are mentioned in a 
[comment|https://issues.apache.org/jira/browse/YARN-4205?focusedCommentId=15525225&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15525225]
 by [~gsaha], which need to be addressed as part of this JIRA.

This JIRA is open for discussing more use cases and trying to solve them.






[jira] [Commented] (YARN-5692) Support more application timeout types

2016-09-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533049#comment-15533049
 ] 

Rohith Sharma K S commented on YARN-5692:
-

Reposting [~gsaha]'s comment from YARN-4205:
{code}
YARN-4692 also needs total execution timeout of application. It does not need 
queue or state-store timeouts. The only additional thing it needs, is the 
monitor start time to provide options like LAUNCH_FIRST and LAUNCH_EVERYTIME on 
top of SUBMISSION.
YARN-4692 is for long running or semi long-running services. Long running 
services are meant to run forever. Semi long-running services are meant to run 
for several hours or few days or weeks. For the semi long-running usecases, we 
have applications like a CI (continuous-integration) app or a System Test app, 
which typically needs several hours to finish its job. CI app or System Test 
app owners are not going to be happy that their app did not run at all, even 
though it was given a timeout of 10 hours because YARN got a chance to allocate 
resource to it at the 9:59 hour mark since submission. That is why we need 
LAUNCH_FIRST.
LAUNCH_EVERYTIME will cover scenarios like say a load test app needs to run 
end-to-end for at least 2 days straight to certify a product. If such an app is 
pre-empted at say the 47 hour mark, it needs a fresh 48 hour lifetime the next 
time it is re-launched.
{code}
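
To make the difference between the policies concrete, here is a small sketch 
of how an expiry timestamp could be derived under each one; none of these 
names are committed API:

{code}
// Illustrative only: deriving the monitor's expiry time per policy.
// SUBMISSION starts the clock at submit time, LAUNCH_FIRST at the first
// launch, and LAUNCH_EVERYTIME resets the clock on every (re)launch.
static long expiryTime(String policy, long submitTimeMs,
    long firstLaunchTimeMs, long currentLaunchTimeMs, long timeoutMs) {
  switch (policy) {
    case "SUBMISSION":       return submitTimeMs + timeoutMs;
    case "LAUNCH_FIRST":     return firstLaunchTimeMs + timeoutMs;
    case "LAUNCH_EVERYTIME": return currentLaunchTimeMs + timeoutMs;
    default: throw new IllegalArgumentException("Unknown policy: " + policy);
  }
}
{code}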

> Support more application timeout types
> --
>
> Key: YARN-5692
> URL: https://issues.apache.org/jira/browse/YARN-5692
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> YARN-4205 adds a general framework to monitor an application for timeout. 
> It supports monitoring an application for its lifetime, i.e. monitoring 
> starts from submitted_time. 
> Some users might need to monitor an application in different phases. Say, 
> some users want to monitor from launch, others from registration, and so on. 
> Specific use cases are mentioned in 
> [comment|https://issues.apache.org/jira/browse/YARN-4205?focusedCommentId=15525225&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15525225]
>  by [~gsaha], which need to be addressed as part of this JIRA.
> This JIRA is open for discussing more use cases and trying to solve them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533072#comment-15533072
 ] 

Rohith Sharma K S commented on YARN-4205:
-

Thanks Jian He for reviewing and committing the patch. Thanks Sunil, Vinod and 
Gour for reviewing the patch. Special thanks to [~nijel] for providing the POC 
patch!

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533365#comment-15533365
 ] 

Vrushali C commented on YARN-5585:
--

I believe that, one way or the other, the client does need to send in an 
ordering value, whether it is created time or something else. This gives 
frameworks much more flexibility in building their UI-specific queries and 
makes the timeline service more generic across all frameworks.

Keeping the writes lean is a good goal, but not at the cost of incurring a 
heavy extra read penalty for the UI user. If we can easily avoid read 
penalties through temporary write amplification, that is much more user 
friendly than having the client wait several extra moments to retrieve data. 
Once the UI becomes slow to respond, it becomes harder to use and, to me, that 
ought to be a more important focus than avoiding temporary write 
amplification.

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, the REST call gives the first/last 100 entities. How does one 
> retrieve the next set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... 
> app-10, *getApps?limit=5* gives app-1 to app-5. But there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5647) [Security] Collector and reader side changes for loading auth filters and principals

2016-09-29 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5647:
---
Attachment: (was: YARN-5647-YARN-5355.01.patch)

> [Security] Collector and reader side changes for loading auth filters and 
> principals
> 
>
> Key: YARN-5647
> URL: https://issues.apache.org/jira/browse/YARN-5647
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5647) [Security] Collector and reader side changes for loading auth filters and principals

2016-09-29 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5647:
---
Attachment: YARN-5647-YARN-5355.01.patch

> [Security] Collector and reader side changes for loading auth filters and 
> principals
> 
>
> Key: YARN-5647
> URL: https://issues.apache.org/jira/browse/YARN-5647
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5647) [Security] Collector and reader side changes for loading auth filters and principals

2016-09-29 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5647:
---
Attachment: YARN-5647-YARN-5355.wip.01.patch

> [Security] Collector and reader side changes for loading auth filters and 
> principals
> 
>
> Key: YARN-5647
> URL: https://issues.apache.org/jira/browse/YARN-5647
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5647-YARN-5355.wip.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-09-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533450#comment-15533450
 ] 

Vrushali C commented on YARN-5667:
--

Yes, we are considering moving the coprocessor-related code into a separate 
package in YARN-4985 (not much progress there yet).

I am also considering moving the coprocessor-related functionality (i.e. 
aggregation) into HBase itself. That way ATS v2 can invoke this aggregation 
via HBase API calls, which might make the dependency DAGs "easier" to manage. 
But that remains TBD, so until then we should proceed with moving the 
coprocessor-related code into a separate package.

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> [hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This JIRA proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5384:
--
Attachment: YARN-5384.v7.patch

YARN-5384.v7.patch fixes the test failures in TestGreedyReservationAgent.java.

> Expose priority in ReservationSystem submission APIs
> 
>
> Key: YARN-5384
> URL: https://issues.apache.org/jira/browse/YARN-5384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5384.v1.patch, YARN-5384.v2.patch, 
> YARN-5384.v3.patch, YARN-5384.v4.patch, YARN-5384.v5.patch, 
> YARN-5384.v6.patch, YARN-5384.v7.patch
>
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the changes 
> needed in ApplicationClientProtocol to accomplish it. Please refer to the 
> design doc in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5647) [Security] Collector and reader side changes for loading auth filters and principals

2016-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533715#comment-15533715
 ] 

Hadoop QA commented on YARN-5647:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
52s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 The patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 45s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 54s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830930/YARN-5647-YARN-5355.wip.01.patch
 |
| JIRA Issue | YARN-5647 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5fcd95783935 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 5d7ad39 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13249/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13249/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13249/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [Security] Collector and reader side changes for loading auth filters and 
> principals
> 

[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533740#comment-15533740
 ] 

Rohith Sharma K S commented on YARN-5585:
-

In the weekly sync-up meeting, we discussed the proposed solution further. The 
consensus is:
# The proposal of introducing entityPrefixId remains as it is.
# By default, use createdTime as the entityPrefixId.
# For the REST end point, we can support fromEntityPrefixId, a combination of 
entityPrefixId+entityId, which can be used for pagination (see the example 
below). For the first page, the user need not worry about entityPrefixId and 
can simply get a list of entities. From the second page onwards, pass the last 
entityPrefixId of the previous output to retrieve the next set of entities.
# Single entity retrieval would become an issue if the entityPrefixId is not 
known, so it is required to use a SingleColumnValueFilter for reading single 
entities.
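
For example, the flow in point 3 would look like the following. The URLs are 
illustrative; the final parameter name and the prefix/id separator are not 
decided, and '!' is used purely for illustration:

{noformat}
# First page: no prefix needed; entities come back sorted by entity prefix.
GET /ws/v2/timeline/.../entities?limit=100

# Next page: pass the entityPrefixId+entityId of the last entity returned.
GET /ws/v2/timeline/.../entities?limit=100&fromEntityPrefixId=<lastPrefix>!<lastEntityId>
{noformat}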

[~vrushalic] [~varun_saxena] [~gtCarrera9] and [~vinodkv], please feel free to 
add to or correct the above points.

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, the REST call gives the first/last 100 entities. How does one 
> retrieve the next set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... 
> app-10, *getApps?limit=5* gives app-1 to app-5. But there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3981) support timeline clients not associated with an application

2016-09-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533782#comment-15533782
 ] 

Vrushali C commented on YARN-3981:
--


Can you give an example of what information is to be written at the flow 
level? Is it at the flow level or at the flow run level? Put another way, is 
this information going to be stored each time, say, a hive script is run, or 
is it to be written just the very first time it is ever run? The attributes of 
a flow run, like start time or end time, are determined automatically by the 
coprocessor, so those need not be written specially.

If we need to write information that belongs to a particular flow run but is 
not tied to a specific application, we should write it to the "flow run" 
table, not the "entity" table. 

Implementation detail note: the coprocessor is set up for the flow run table, 
so a little more attention is needed here to ensure we set (or do not set) the 
right cell tags. 

In order to determine where writer processes should run, how many there should 
be, how often they run, how to discover them, and so on, I think it will be 
helpful to know what kind of information is to be written. 


> support timeline clients not associated with an application
> ---
>
> Key: YARN-3981
> URL: https://issues.apache.org/jira/browse/YARN-3981
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>  Labels: YARN-5355
>
> In the current v.2 design, all timeline writes must belong in a 
> flow/application context (cluster + user + flow + flow run + application).
> But there are use cases that require writing data outside the context of an 
> application. One such example is a higher level client (e.g. tez client or 
> hive/oozie/cascading client) writing flow-level data that spans multiple 
> applications. We need to find a way to support them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533830#comment-15533830
 ] 

Vrushali C commented on YARN-5585:
--

Thanks [~rohithsharma] for the summary.

bq. 2. By default, use createdTime as the entityPrefixId.
Also, that means frameworks which don't want to use the entity id prefix have 
to explicitly specify a null prefix (or a special value that means null).
All the same, it will be really good to mention in the docs that clients 
should do the following. 

{code:title=TimelineWriterClient.java}
entity.setEntityPrefix(createdTime);
client.writeEntity(entity); // pseudo-code
{code}

bq. For the REST end point, we can support fromEntityPrefixId, a combination 
of entityPrefixId+entityId, which can be used for pagination
I think pagination handling should be more generic than depending on something 
like "fromEntityPrefixId". REST queries should simply ask for the top N 
records, with the understanding that the records are returned in sorted order 
of entity prefixes. For the next page of results, the client sends back the 
key/entity prefix of the last row returned. For a REST query, if the 
"startFrom" query param is present, the scan starts from the "startFrom" 
prefix value and returns the next N such records.
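
As an illustration of this more generic scheme (the URL shape is assumed; only 
the "startFrom" parameter name comes from the paragraph above):

{noformat}
GET /ws/v2/timeline/.../entities?limit=N                     -> first N rows, sorted by entity prefix
GET /ws/v2/timeline/.../entities?limit=N&startFrom=<lastKey> -> next N rows after <lastKey>
{noformat}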




> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, the REST call gives the first/last 100 entities. How does one 
> retrieve the next set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... 
> app-10, *getApps?limit=5* gives app-1 to app-5. But there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5486) Update OpportunisticContainerAllocatorAMService::allocate method to handle OPPORTUNISTIC container requests

2016-09-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533837#comment-15533837
 ] 

Arun Suresh commented on YARN-5486:
---

Thanks for updating the patch, [~kkaranasos].
I have linked the JIRAs you created to this.

+1, will commit this shortly.

> Update OpportunisticContainerAllocatorAMService::allocate method to handle 
> OPPORTUNISTIC container requests
> ---
>
> Key: YARN-5486
> URL: https://issues.apache.org/jira/browse/YARN-5486
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5486.001.patch, YARN-5486.002.patch, 
> YARN-5486.003.patch, YARN-5486.004.patch
>
>
> YARN-5457 refactors the Distributed Scheduling framework to move the 
> container allocator to yarn-server-common.
> This JIRA proposes to update the allocate method in the new AM service to use 
> the OpportunisticContainerAllocator to allocate opportunistic containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-29 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15533830#comment-15533830
 ] 

Vrushali C edited comment on YARN-5585 at 9/29/16 7:40 PM:
---

Thanks [~rohithsharma] for the summary.

bq. 2. By default, use createdTime as the entityPrefixId.

Also, that means frameworks which don't want to use the entity id prefix have 
to explicitly specify a null prefix (or a special value that means null).
All the same, it will be really good to mention in the docs that clients 
should do the following. 

{code:title=TimelineWriterClient.java}
entity.setEntityPrefix(createdTime);
client.writeEntity(entity); // pseudo-code
{code}


bq. For the REST end point, we can support fromEntityPrefixId, a combination 
of entityPrefixId+entityId, which can be used for pagination

I think pagination handling should be more generic than depending on something 
like "fromEntityPrefixId". REST queries should simply ask for the top N 
records, with the understanding that the records are returned in sorted order 
of entity prefixes. For the next page of results, the client sends back the 
key/entity prefix of the last row returned. For a REST query, if the 
"startFrom" query param is present, the scan starts from the "startFrom" 
prefix value and returns the next N such records.





was (Author: vrushalic):
Thanks [~rohithsharma] for the summary.

bq. 2. By default, use createdTime as the entityPrefixId.
Also, that means frameworks which don't want to use the entity id prefix have 
to explicitly specify a null prefix (or a special value that means null).
All the same, it will be really good to mention in the docs that clients 
should do the following. 

{code:title=TimelineWriterClient.java}
entity.setEntityPrefix(createdTime);
client.writeEntity(entity); // pseudo-code
{code}

bq. For the REST end point, we can support fromEntityPrefixId, a combination 
of entityPrefixId+entityId, which can be used for pagination
I think pagination handling should be more generic than depending on something 
like "fromEntityPrefixId". REST queries should simply ask for the top N 
records, with the understanding that the records are returned in sorted order 
of entity prefixes. For the next page of results, the client sends back the 
key/entity prefix of the last row returned. For a REST query, if the 
"startFrom" query param is present, the scan starts from the "startFrom" 
prefix value and returns the next N such records.




> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, fromId, 
> so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, the REST call gives the first/last 100 entities. How does one 
> retrieve the next set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... 
> app-10, *getApps?limit=5* gives app-1 to app-5. But there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in a web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534007#comment-15534007
 ] 

Hadoop QA commented on YARN-5384:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 31 unchanged - 0 fixed = 32 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 25s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 13s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830946/YARN-5384.v7.patch |
| JIRA Issue | YARN-5384 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall 

[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534056#comment-15534056
 ] 

Chris Nauroth commented on YARN-4205:
-

I think this patch broke compilation on branch-2.

{code}
  private Map monitoredApps =
  new HashMap();
{code}

{code}
monitoredApps.putIfAbsent(appToMonitor, timeout);
{code}

[{{Map#putIfAbsent}}|https://docs.oracle.com/javase/8/docs/api/java/util/Map.html#putIfAbsent-K-V-]
 was added in JDK 1.8, but we want to be able to compile branch-2 for JDK 1.7.

Can someone please take a look?
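
For reference, a JDK 1.7-compatible expansion is straightforward. A sketch 
only; the actual addendum patch may differ:

{code}
// JDK 1.7-compatible equivalent of Map#putIfAbsent for a plain HashMap.
// Not atomic on its own: callers must already synchronize around accesses
// to monitoredApps, which a non-concurrent HashMap requires in any case.
if (!monitoredApps.containsKey(appToMonitor)) {
  monitoredApps.put(appToMonitor, timeout);
}
{code}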

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205_01.patch, YARN-4205_02.patch, 
> YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5384:
--
Attachment: YARN-5384.v8.patch

YARN-5384.v8.patch fixes the checkstyle issues introduced by YARN-5384.v7.patch.

> Expose priority in ReservationSystem submission APIs
> 
>
> Key: YARN-5384
> URL: https://issues.apache.org/jira/browse/YARN-5384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5384.v1.patch, YARN-5384.v2.patch, 
> YARN-5384.v3.patch, YARN-5384.v4.patch, YARN-5384.v5.patch, 
> YARN-5384.v6.patch, YARN-5384.v7.patch, YARN-5384.v8.patch
>
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the changes 
> needed in ApplicationClientProtocol to accomplish it. Please refer to the 
> design doc in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-4205:

Attachment: YARN-4205-addendum.001.patch

Uploading an addendum patch that expands the (non-synchronized) putIfAbsent 
call, to unblock branch-2 builds. 

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205-addendum.001.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5325) Stateless ARMRMProxy policies implementation

2016-09-29 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5325:
---
Attachment: YARN-5325-YARN-2915.07.patch

> Stateless ARMRMProxy policies implementation
> 
>
> Key: YARN-5325
> URL: https://issues.apache.org/jira/browse/YARN-5325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5325-YARN-2915.05.patch, 
> YARN-5325-YARN-2915.06.patch, YARN-5325-YARN-2915.07.patch, 
> YARN-5325.01.patch, YARN-5325.02.patch, YARN-5325.03.patch, YARN-5325.04.patch
>
>
> This JIRA tracks policies in the AMRMProxy that decide how to forward 
> ResourceRequests, without maintaining substantial state across decisions 
> (e.g., broadcast).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-09-29 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534218#comment-15534218
 ] 

Haibo Chen commented on YARN-5667:
--

Thanks [~vrushalic] for the info! I will then leave moving the coprocessor to 
YARN-4985. As discussed in the meeting, I am planning to first extract all 
HBase-related code from the core ATS into a separate module. Then we can 
continue to extract the coprocessor code as a follow-up. Per a previous 
discussion with Sangjin, the [coprocessor ---> HBase client code in ATS] 
dependency is not necessary. But looking at the code, the coprocessor does 
need to depend on the HBase table definition code, so I wonder if it is OK to 
extract the schema code out and then have the coprocessor and 
HBaseWriter/Reader both depend on the schema module (see the sketch below). We 
can always defer that discussion to YARN-4985 when we get there, though.
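
A rough sketch of the dependency direction being discussed; the module names 
are tentative, taken from the description and the comments above:

{noformat}
hadoop-yarn-server-resourcemanager
  --> hadoop-yarn-server-timelineservice           (core ATS v2, no HBase jars)

hadoop-yarn-server-timelineservice-storage         (HBase readers/writers)
  --> <schema module with table definitions, TBD>  (would also serve the coprocessor)
  --> hbase-client / hbase-server ...
{noformat}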

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0 which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase2.0 
> and Hadoop3 artifacts.
> {code}
> [hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This JIRA proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534232#comment-15534232
 ] 

Chris Nauroth commented on YARN-4205:
-

+1 for the addendum patch.  [~gtCarrera9], thank you.

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205-addendum.001.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534247#comment-15534247
 ] 

Li Lu commented on YARN-4205:
-

[~cnauroth], shall we commit this patch to branch-2 only, or to trunk as well? 

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205-addendum.001.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534277#comment-15534277
 ] 

Chris Nauroth commented on YARN-4205:
-

[~gtCarrera9], we are all clear to use JDK 8 features in trunk, so I think 
committing only to branch-2 is fine.

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205-addendum.001.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-09-29 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534283#comment-15534283
 ] 

Gour Saha commented on YARN-5610:
-

[~jianhe] addressing your 3rd set of comments.

{quote}
I would prefer renaming such a pair: STARTED -> READY, or RUNNING -> READY
{quote}
Makes sense. Renamed RUNNING -> READY. Also removed FINISHED as you suggested 
before.

{quote}
Do you mean swagger has such a date type in string format ? which one is this? 
I couldn't find it in swagger documentation.
{quote}
No, I meant that our REST API yaml document specifies the expected and 
supported format of lifetime to be like "30mins, 10hours, 5days".

{quote}
I meant what is this change used for ?
{noformat}
  <scm>
    <url>http://git-wip-us.apache.org/repos/asf/hadoop.git</url>
    <connection>scm:git:http://git-wip-us.apache.org/repos/asf/hadoop.git</connection>
    <developerConnection>scm:git:http://git-wip-us.apache.org/repos/asf/hadoop.git</developerConnection>
  </scm>
{noformat}
{quote}
The buildnumber-maven-plugin needs the scm url to be defined. Hence I had to 
add those lines. Anyway I removed the buildnumber-maven-plugin for now, and 
removed the scm section as well.

{quote}
I see. Then the   if (uniqueGlobalPropertyCache == null) condition is not 
needed, because uniqueGlobalPropertyCache is initialized as not null.
{noformat}
private void addOptionsIfNotPresent(List<String> options,
    Set<String> uniqueGlobalPropertyCache, String key, String value) {
  if (uniqueGlobalPropertyCache == null) {
    options.addAll(Arrays.asList(key, value));
  }
{noformat}
{quote}
In the current call to addOptionsIfNotPresent, uniqueGlobalPropertyCache is 
initialized. But there could be other callers later who might not initialize 
it. The method has 2 different code paths: when uniqueGlobalPropertyCache is 
null (the caller does not want to use the cache) vs. when it is non-null (the 
caller wants to use the cache).
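
A sketch of the complete method as described; the cache branch is inferred 
from the description above, and the generic types are assumed since the mail 
archive stripped them:

{code}
// Sketch only. A null cache means "always add the pair"; a non-null cache
// de-duplicates by key and records each key it has seen.
private void addOptionsIfNotPresent(List<String> options,
    Set<String> uniqueGlobalPropertyCache, String key, String value) {
  if (uniqueGlobalPropertyCache == null) {
    options.addAll(Arrays.asList(key, value));
  } else if (!uniqueGlobalPropertyCache.contains(key)) {
    options.addAll(Arrays.asList(key, value));
    uniqueGlobalPropertyCache.add(key);
  }
}
{code}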

{quote}
Then, what are the other parameters in Component used for in this case ? like 
number_of_containers, launch_command, resource etc.
{quote}
They are not required for the external APPLICATION type.


> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534288#comment-15534288
 ] 

Li Lu commented on YARN-4205:
-

OK, I'll commit it shortly then. 

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205-addendum.001.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534307#comment-15534307
 ] 

Li Lu commented on YARN-4205:
-

I committed the addendum patch to branch-2. Thanks [~cnauroth] for the review. 
[~jianhe] [~rohithsharma] if there are any concerns with this fix, please feel 
free to post a new one. Thanks! 

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205-addendum.001.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitor service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is measured from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5486) Update OpportunisticContainerAllocatorAMService::allocate method to handle OPPORTUNISTIC container requests

2016-09-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534322#comment-15534322
 ] 

Hudson commented on YARN-5486:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10516 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10516/])
YARN-5486. Update OpportunisticContainerAllocatorAMService::allocate (arun 
suresh: rev 10be45986cdf86a89055065b752959bd6369d54f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/OpportunisticContainerAllocatorAMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestOpportunisticContainerAllocatorAMService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/DefaultRequestInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/OpportunisticContainerAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/scheduler/DistributedScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/distributed/NodeQueueLoadMonitor.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestOpportunisticContainerAllocation.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/OpportunisticContainerContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java


> Update OpportunisticContainerAllocatorAMService::allocate method to handle 
> OPPORTUNISTIC container requests
> ---
>
> Key: YARN-5486
> URL: https://issues.apache.org/jira/browse/YARN-5486
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5486.001.patch, YARN-5486.002.patch, 
> YARN-5486.003.patch, YARN-5486.004.patch
>
>
> YARN-5457 refactors the Distributed Scheduling framework to move the 
> container allocator to yarn-server-common.
> This JIRA proposes to update the allocate method in the new AM service to use 
> the OpportunisticContainerAllocator to allocate opportunistic containers.
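
As a rough sketch of that delegation (simplified types; the real AM service 
works with the YARN protocol records and the scheduler, so everything below is 
illustrative):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Sketch only: split incoming requests by execution type, let an
// opportunistic allocator handle the OPPORTUNISTIC ones, and pass the
// rest on to the regular scheduler path.
class AllocateSketch {
  enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

  static class Request {
    final ExecutionType type;
    Request(ExecutionType type) { this.type = type; }
  }

  interface Allocator {
    List<String> allocate(List<Request> requests);
  }

  static List<String> allocate(List<Request> ask,
                               Allocator opportunisticAllocator,
                               Allocator guaranteedScheduler) {
    List<Request> opportunistic = new ArrayList<>();
    List<Request> guaranteed = new ArrayList<>();
    for (Request r : ask) {
      (r.type == ExecutionType.OPPORTUNISTIC ? opportunistic : guaranteed).add(r);
    }
    List<String> containers = new ArrayList<>();
    containers.addAll(opportunisticAllocator.allocate(opportunistic));
    containers.addAll(guaranteedScheduler.allocate(guaranteed));
    return containers;
  }
}
{code}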



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5325) Stateless ARMRMProxy policies implementation

2016-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534370#comment-15534370
 ] 

Hadoop QA commented on YARN-5325:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
4s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
13s {color} | {color:green} The patch generated 0 new + 74 unchanged - 1 fixed 
= 74 total (was 75) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831016/YARN-5325-YARN-2915.07.patch
 |
| JIRA Issue | YARN-5325 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  shellcheck  shelldocs  |
| uname | Linux 977462d48cb3 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 6f91338 |
| Default Java | 1.8.0_101 |
| shellcheck | v0.4.4 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13251/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13251/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Con

[jira] [Comment Edited] (YARN-5486) Update OpportunisticContainerAllocatorAMService::allocate method to handle OPPORTUNISTIC container requests

2016-09-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534387#comment-15534387
 ] 

Arun Suresh edited comment on YARN-5486 at 9/29/16 11:29 PM:
-

Thanks for the patch [~kkaranasos] and for the reviews [~subru].
Committed this to trunk.
Will commit to branch-2 once YARN-5688 is fixed.


was (Author: asuresh):
Thanks for the patch [~kkaranasos] and for the reviews [~subru]..
Committed this to trunk

> Update OpportunisticContainerAllocatorAMService::allocate method to handle 
> OPPORTUNISTIC container requests
> ---
>
> Key: YARN-5486
> URL: https://issues.apache.org/jira/browse/YARN-5486
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5486.001.patch, YARN-5486.002.patch, 
> YARN-5486.003.patch, YARN-5486.004.patch
>
>
> YARN-5457 refactors the Distributed Scheduling framework to move the 
> container allocator to yarn-server-common.
> This JIRA proposes to update the allocate method in the new AM service to use 
> the OpportunisticContainerAllocator to allocate opportunistic containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-09-29 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534459#comment-15534459
 ] 

Gour Saha commented on YARN-5610:
-

You are right. If it is set from the script, then it is not required in the 
java code. Added it to the script run_rest_service.sh as well.

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534555#comment-15534555
 ] 

Hadoop QA commented on YARN-5384:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 4s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 7s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831001/YARN-5384.v8.patch |
| JIRA Issue | YARN-5384 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 702e84931121 3.13.0

[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-09-29 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534657#comment-15534657
 ] 

Gour Saha commented on YARN-5610:
-

{quote}
One more thing, let's throw this code away in this patch. IIUC, without the 
agent code, the original slider lifetime code is not going to work any way.
{quote}
Agreed and removed.

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5610) Initial code for native services REST API

2016-09-29 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534459#comment-15534459
 ] 

Gour Saha edited comment on YARN-5610 at 9/30/16 1:09 AM:
--

You are right. If it is set from the script, then it is not required in the 
java code. Removed it from java code and added to the script 
run_rest_service.sh.


was (Author: gsaha):
You are right. If it is set from the script, then it is not required in the 
java code. Added it to the script run_rest_service.sh as well.

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5610) Initial code for native services REST API

2016-09-29 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5610:

Attachment: YARN-5610-yarn-native-services.003.patch

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch, 
> YARN-5610-yarn-native-services.003.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5610) Initial code for native services REST API

2016-09-29 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5610:

Fix Version/s: yarn-native-services

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch, 
> YARN-5610-yarn-native-services.003.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5675) Checkin swagger definition in the repo

2016-09-29 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-5675:

Fix Version/s: yarn-native-services

> Checkin swagger definition in the repo
> --
>
> Key: YARN-5675
> URL: https://issues.apache.org/jira/browse/YARN-5675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Fix For: yarn-native-services
>
>
> This task will be used to submit the REST API swagger definition (yaml 
> format) to be checked in to the repo



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534737#comment-15534737
 ] 

Subru Krishnan commented on YARN-5384:
--

Thanks [~seanpo03] for addressing my comments. 

The latest patch is very close; I have a few minor suggestions:
  * In YARN, {{Priority}} is an inverse value, i.e. the lower the value, the 
higher the absolute Priority. Can you kindly update the docs accordingly?
  * This statement is not fully correct:
bq. Note that a recurring reservation will implicitly have the highest possible 
priority
Recurring reservations are always higher priority than non-recurring ones, and 
we compare Priority within each group as described above (see the sketch after 
this list).
  * We should use *Priority.UNDEFINED* if Priority is not specified.
  * All the API changes related to Priority should be marked _unstable_. Can 
you also update the ones related to recurrence and name, as they are currently 
incorrectly annotated?
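
For illustration, a minimal sketch of that ordering (the {{Reservation}} type 
and its fields are made up for the example; the rule itself, recurring first 
and then lower numeric value wins, is the one described above):

{code:java}
import java.util.Comparator;

// Sketch only: recurring reservations sort before non-recurring ones;
// within each group, a lower numeric priority value wins (inverse priority).
class ReservationOrderSketch {
  static class Reservation {
    final boolean recurring;
    final int priority;  // lower value means higher absolute priority
    Reservation(boolean recurring, int priority) {
      this.recurring = recurring;
      this.priority = priority;
    }
  }

  static final Comparator<Reservation> ORDER =
      Comparator.comparing((Reservation r) -> !r.recurring)  // recurring first
                .thenComparingInt(r -> r.priority);          // then lower value

  public static void main(String[] args) {
    Reservation a = new Reservation(false, 1);
    Reservation b = new Reservation(true, 5);
    // b sorts first: recurrence trumps the numeric priority value
    System.out.println(ORDER.compare(b, a) < 0);  // prints true
  }
}
{code}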


> Expose priority in ReservationSystem submission APIs
> 
>
> Key: YARN-5384
> URL: https://issues.apache.org/jira/browse/YARN-5384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5384.v1.patch, YARN-5384.v2.patch, 
> YARN-5384.v3.patch, YARN-5384.v4.patch, YARN-5384.v5.patch, 
> YARN-5384.v6.patch, YARN-5384.v7.patch, YARN-5384.v8.patch
>
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the changes 
> needed in ApplicationClientProtocol to accomplish it. Please refer to the 
> design doc in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5683) Support specifying storage type for per-application local dirs

2016-09-29 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-5683:
---
Description: 
h3.  Introduction
* Some applications of various frameworks (Flink, Spark and MapReduce etc) 
using local storage (checkpoint, shuffle etc) might require high IO 
performance. It's useful to allocate local directories to high performance 
storage media for these applications on heterogeneous clusters.
* YARN does not distinguish different storage types and hence applications 
cannot selectively use storage media with different performance 
characteristics. Adding awareness of storage media can allow YARN to make 
better decisions about the placement of local directories.

h3.  Approach
* NodeManager will distinguish storage types for local directories.
** yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs configuration 
should allow the cluster administrator to optionally specify the storage type 
for each local directories. Example: 
[SSD]/disk1/nm-local-dir,/disk2/nm-local-dir,/disk3/nm-local-dir (equals to 
[SSD]/disk1/nm-local-dir,[DISK]/disk2/nm-local-dir,[DISK]/disk3/nm-local-dir)
** StorageType defines DISK/SSD storage types and takes DISK as the default 
storage type. 
** StorageLocation separates storage type and directory path, used by 
LocalDirAllocator to aware the types of local dirs, the default storage type is 
DISK.
** getLocalPathForWrite method of LocalDirAllcator will prefer to choose the 
local directory of the specified storage type, and will fallback to not care 
storage type if the requirement can not be satisfied.
** Support for container related local/log directories by ContainerLaunch. All 
application frameworks can set the environment variables (LOCAL_STORAGE_TYPE 
and LOG_STORAGE_TYPE) to specified the desired storage type of local/log 
directories.
* Allow specified storage type for various frameworks (Take MapReduce as an 
example)
** Add new configurations should allow application administrator to optionally 
specify the storage type of local/log directories. (MapReduce add 
configurations: mapreduce.job.local-storage-type and 
mapreduce.job.log-storage-type)
** Support for container work directories. Set the environment variables 
includes LOCAL_STORAGE_TYPE and LOG_STORAGE_TYPE according to configurations 
above for ContainerLaunchContext and ApplicationSubmissionContext. (MapReduce 
should update YARNRunner and TaskAttemptImpl)
** Add storage type prefix for request path to support for other local 
directories of frameworks (such as shuffle directories for MapReduce). 
(MapReduce should update YarnOutputFiles, MROutputFiles and YarnChild to 
support for output/work directories)
** Flow diagram for MapReduce framework
!flow_diagram_for_MapReduce.png!

h3.  Further Discussion
* The requirement of storage type for local/log directories may not be 
satisfied on heterogeneous clusters. To achieve global optimum, scheduler 
should aware and manage disk resources. 
[YARN-2139|https://issues.apache.org/jira/browse/YARN-2139] is close to that 
but seems not support multiple storage types, maybe we should do even more to 
aware the storage type of disk resource?
* Node labels or node constraints 
([YARN-3409|https://issues.apache.org/jira/browse/YARN-3409]) can also make a 
higher chance to satisfy the requirement of specified storage type.
* Fallback strategy still needs to be concerned. Certain applications might not 
work well when the requirement of storage type is not satisfied. When none of 
desired storage type disk are available, should container launching be failed? 
let AM handle?

This feature has been used for half a year to meet the needs of some 
applications on Alibaba search clusters.
Please feel free to give your suggestions and opinions.

  was:
h3.  Introduction
* Some applications of various frameworks (Flink, Spark and MapReduce etc) 
using local storage (checkpoint, shuffle etc) might require high IO 
performance. It's useful to allocate local directories to high performance 
storage media for these applications on heterogeneous clusters.
* YARN does not distinguish different storage types and hence applications 
cannot selectively use storage media with different performance 
characteristics. Adding awareness of storage media can allow YARN to make 
better decisions about the placement of local directories.

h3.  Approach
* NodeManager will distinguish storage types for local directories.
** yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs configuration 
should allow the cluster administrator to optionally specify the storage type 
for each local directories. Example: 
[SSD]/disk1/nm-local-dir,/disk2/nm-local-dir,/disk3/nm-local-dir (equals to 
[SSD]/disk1/nm-local-dir,[DISK]/disk2/nm-local-dir,[DISK]/disk3/nm-local-dir)
** StorageType defines DISK/SSD storage types and takes DISK as the default 
storage type. 
** StorageLocation separates storag
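
For illustration, a minimal sketch of the [TYPE]/path prefix parsing described 
in the approach above (the enum and helper names are made up for the example, 
not the actual StorageLocation code):

{code:java}
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Sketch only: dirs like "[SSD]/disk1/nm-local-dir" carry an optional
// [TYPE] prefix selecting the storage type; DISK is the default.
class StorageLocationSketch {
  enum StorageType { DISK, SSD }

  static Map.Entry<StorageType, String> parse(String dir) {
    if (dir.startsWith("[")) {
      int end = dir.indexOf(']');
      StorageType type = StorageType.valueOf(dir.substring(1, end).toUpperCase());
      return new SimpleEntry<>(type, dir.substring(end + 1));
    }
    return new SimpleEntry<>(StorageType.DISK, dir);  // default storage type
  }

  public static void main(String[] args) {
    for (String d : "[SSD]/disk1/nm-local-dir,/disk2/nm-local-dir".split(",")) {
      Map.Entry<StorageType, String> e = parse(d);
      System.out.println(e.getKey() + " -> " + e.getValue());
    }
  }
}
{code}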

[jira] [Updated] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5384:
--
Attachment: YARN-5384.v9.patch

YARN-5384.v9.patch addresses all your comments [~subru]. Thanks!

> Expose priority in ReservationSystem submission APIs
> 
>
> Key: YARN-5384
> URL: https://issues.apache.org/jira/browse/YARN-5384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5384.v1.patch, YARN-5384.v2.patch, 
> YARN-5384.v3.patch, YARN-5384.v4.patch, YARN-5384.v5.patch, 
> YARN-5384.v6.patch, YARN-5384.v7.patch, YARN-5384.v8.patch, YARN-5384.v9.patch
>
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the changes 
> needed in ApplicationClientProtocol to accomplish it. Please refer to the 
> design doc in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534852#comment-15534852
 ] 

Rohith Sharma K S commented on YARN-4205:
-

+1 for the addendum patch. Thanks [~gtCarrera9] for the quick response.

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205-addendum.001.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitoring service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is counted from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5525) Make log aggregation service class configurable

2016-09-29 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534903#comment-15534903
 ] 

Subru Krishnan commented on YARN-5525:
--

Thanks [~botong] for your response. 

bq. One reason that we also want to customize LogAggregationService is to allow 
a different root log directory to upload logs to per app

I want to make sure I fully grok your scenario (and also ensure we are not 
over-engineering :)) - do you have concrete reasons for the proposal other 
than the one above? I am asking because I feel it would be much easier to 
extend the existing {{LogAggregationService}} to allow specifying per-app root 
dirs than to override the entire service.
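
For context, making the service class configurable would presumably boil down 
to the standard Hadoop idiom sketched below (the config key and the default 
class argument are hypothetical; {{Configuration.getClass}} and 
{{ReflectionUtils.newInstance}} are existing Hadoop APIs):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Sketch only: load the log aggregation service class from configuration,
// falling back to a caller-supplied default. The key name is hypothetical.
class PluggableServiceSketch {
  static final String KEY = "yarn.nodemanager.log-aggregation.service.class";

  static Object createService(Configuration conf, Class<?> defaultClass) {
    Class<?> clazz = conf.getClass(KEY, defaultClass);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}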

> Make log aggregation service class configurable
> ---
>
> Key: YARN-5525
> URL: https://issues.apache.org/jira/browse/YARN-5525
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Giovanni Matteo Fumarola
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-5525.v1.patch, YARN-5525.v2.patch, 
> YARN-5525.v3.patch
>
>
> Make the log aggregation service class configurable and extensible, so that 
> alternative log aggregation behaviors, like an app-specific log aggregation 
> directory or log aggregation format, can be implemented and plugged in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5384:
--
Attachment: YARN-5384.v10.patch

YARN-5384.v10.patch replaces all occurrences of the default 
Priority.newInstance(0) with Priority.UNDEFINED.

> Expose priority in ReservationSystem submission APIs
> 
>
> Key: YARN-5384
> URL: https://issues.apache.org/jira/browse/YARN-5384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5384.v1.patch, YARN-5384.v10.patch, 
> YARN-5384.v2.patch, YARN-5384.v3.patch, YARN-5384.v4.patch, 
> YARN-5384.v5.patch, YARN-5384.v6.patch, YARN-5384.v7.patch, 
> YARN-5384.v8.patch, YARN-5384.v9.patch
>
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the changes 
> needed in ApplicationClientProtocol to accomplish it. Please refer to the 
> design doc in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534925#comment-15534925
 ] 

Sean Po edited comment on YARN-5384 at 9/30/16 3:47 AM:


YARN-5384.v10.patch replaces all occurrences of the default 
Priority.newInstance(0) with Priority.UNDEFINED.


was (Author: seanpo03):
YARN-5384.v10.patch replaces all occurences of the default 
Priority.newInstance(0) with Priority.UNDEFINED.

> Expose priority in ReservationSystem submission APIs
> 
>
> Key: YARN-5384
> URL: https://issues.apache.org/jira/browse/YARN-5384
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5384.v1.patch, YARN-5384.v10.patch, 
> YARN-5384.v2.patch, YARN-5384.v3.patch, YARN-5384.v4.patch, 
> YARN-5384.v5.patch, YARN-5384.v6.patch, YARN-5384.v7.patch, 
> YARN-5384.v8.patch, YARN-5384.v9.patch
>
>
> YARN-5211 proposes adding support for generalized priorities for reservations 
> in the YARN ReservationSystem. This JIRA is a sub-task to track the changes 
> needed in ApplicationClientProtocol to accomplish it. Please refer to the 
> design doc in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15534953#comment-15534953
 ] 

Hadoop QA commented on YARN-5610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 1s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 37s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
9s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 55s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
34s {color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} yarn-native-services passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-applications in yarn-native-services failed. 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 36s 
{color} | {color:red} root: The patch generated 242 new + 0 unchanged - 0 fixed 
= 242 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 47s 
{color} | {color:red} hadoop-yarn-applications in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-services-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 13s 
{color} | {color:red} The patch generated 15 new + 77 unchanged - 0 fixed = 92 
total (was 77) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 7 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-yarn-applications in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-api
 generated 119 new + 0 unchanged - 0 fixed = 119 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:gr

[jira] [Commented] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535005#comment-15535005
 ] 

Hadoop QA commented on YARN-5384:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 31 unchanged - 0 fixed = 32 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 55s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831050/YA

[jira] [Commented] (YARN-4205) Add a service for monitoring application life time out

2016-09-29 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535013#comment-15535013
 ] 

Jian He commented on YARN-4205:
---

[~cnauroth], [~gtCarrera9], thanks for helping on this! My bad.

> Add a service for monitoring application life time out
> --
>
> Key: YARN-4205
> URL: https://issues.apache.org/jira/browse/YARN-4205
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: nijel
>Assignee: Rohith Sharma K S
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4205.patch, 0002-YARN-4205.patch, 
> 0003-YARN-4205.patch, 0004-YARN-4205.patch, 0005-YARN-4205.patch, 
> 0006-YARN-4205.patch, 0007-YARN-4205.1.patch, 0007-YARN-4205.2.patch, 
> 0007-YARN-4205.patch, YARN-4205-addendum.001.patch, YARN-4205_01.patch, 
> YARN-4205_02.patch, YARN-4205_03.patch
>
>
> This JIRA intends to provide a lifetime monitoring service. 
> The service will monitor the applications for which a lifetime is configured. 
> If an application runs beyond its lifetime, it will be killed. 
> The lifetime is counted from the submit time.
> The monitoring thread's interval is configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5384) Expose priority in ReservationSystem submission APIs

2016-09-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535038#comment-15535038
 ] 

Hadoop QA commented on YARN-5384:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 31 unchanged - 0 fixed = 32 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 24s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 19s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12831056/YARN-5384.v10.patch |
| JIRA Issue | YARN-5384 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall

[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-09-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535174#comment-15535174
 ] 

Sunil G commented on YARN-5375:
---

+1 for the state-store approach.

With this approach, we can now make sure that a target state is reached for 
the scheduler or the state store, so overall we could also improve on test 
case duration. I haven't looked at the patch in detail; will do that soon. 
Thank you.

> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch, YARN-5375.06.patch, 
> YARN-5375.07-drain-statestore.patch, YARN-5375.07-sync-statestore.patch
>
>
> We have seen many test failures where an RMApp/RMAppAttempt reaches some 
> state but some events are not yet processed in the RM event queue or the 
> scheduler event queue, causing the test to fail. It seems we could implicitly 
> invoke drainEvents (which should also drain scheduler events) in MockRM 
> methods like waitForState.
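
A minimal sketch of what that could look like (the interface and method names 
below are illustrative, not MockRM's actual API):

{code:java}
// Sketch only: poll for the target state, draining pending events between
// checks so no RM or scheduler event is left unprocessed when we assert.
interface TestRM {
  String getAppState(String appId);
  void drainEvents();  // should drain scheduler events too
}

class WaitForStateSketch {
  static boolean waitForState(TestRM rm, String appId, String expected,
                              long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      rm.drainEvents();  // implicit drain, as the JIRA proposes
      if (expected.equals(rm.getAppState(appId))) {
        return true;
      }
      Thread.sleep(100);
    }
    return false;
  }
}
{code}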



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4855) Should check if node exists when replace nodelabels

2016-09-29 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535177#comment-15535177
 ] 

Sunil G commented on YARN-4855:
---

I also generally agree with Wangda's approach. Improving this to use the 
cliParser option could be tracked separately.

> Should check if node exists when replace nodelabels
> ---
>
> Key: YARN-4855
> URL: https://issues.apache.org/jira/browse/YARN-4855
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: YARN-4855.001.patch, YARN-4855.002.patch, 
> YARN-4855.003.patch, YARN-4855.004.patch, YARN-4855.005.patch, 
> YARN-4855.006.patch, YARN-4855.007.patch, YARN-4855.008.patch, 
> YARN-4855.009.patch, YARN-4855.010.patch, YARN-4855.011.patch, 
> YARN-4855.012.patch
>
>
> Today, when we add node labels to nodes, it succeeds without any message even 
> if the nodes are not existing NodeManagers in the cluster.
> It could be like this:
> When we use *yarn rmadmin -replaceLabelsOnNode --fail-on-unknown-nodes 
> "node1=label1"*, it would be denied if the node is unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org