[jira] [Updated] (YARN-5242) Update DominantResourceCalculator to consider all resource types in calculations

2016-07-06 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-5242:

Attachment: YARN-5242-YARN-3926.003.patch

Fair point [~sunilg]. Uploaded a new patch that fixes the behaviour.

> Update DominantResourceCalculator to consider all resource types in 
> calculations
> 
>
> Key: YARN-5242
> URL: https://issues.apache.org/jira/browse/YARN-5242
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-5242-YARN-3926.001.patch, 
> YARN-5242-YARN-3926.002.patch, YARN-5242-YARN-3926.003.patch
>
>
> The fitsIn function in the DominantResourceCalculator only looks at memory 
> and CPU. It should be modified to use all available resource types.
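For illustration only, a minimal sketch of a type-agnostic fitsIn check; the ResourceVector type and its accessors below are made up for the example and are not the real YARN-3926 Resource API:

{code:title=FitsInSketch.java}
import java.util.Map;
import java.util.Set;

/** Illustrative resource vector keyed by resource-type name (not the real API). */
final class ResourceVector {
  private final Map<String, Long> values;
  ResourceVector(Map<String, Long> values) { this.values = values; }
  long get(String type) { return values.getOrDefault(type, 0L); }
  Set<String> types() { return values.keySet(); }
}

final class FitsInSketch {
  /** fitsIn succeeds only if every resource type of 'smaller' fits into 'bigger'. */
  static boolean fitsIn(ResourceVector smaller, ResourceVector bigger) {
    for (String type : smaller.types()) {
      if (smaller.get(type) > bigger.get(type)) {
        return false;
      }
    }
    return true;
  }
}
{code}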






[jira] [Commented] (YARN-5315) Standby RM keeps sending start AM container request to NM

2016-07-06 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365654#comment-15365654
 ] 

Rohith Sharma K S commented on YARN-5315:
-

bq. awaitTermination will block until all tasks terminate; this may delay the 
stop process of other services, should we do that?
I think a timed awaitTermination with a 60-second timeout can be used.
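For reference, a minimal sketch of the timed shutdown being suggested, written against plain java.util.concurrent rather than the actual ApplicationMasterLauncher code:

{code:title=TimedShutdown.java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class TimedShutdown {
  /** Shut the pool down, waiting at most 60 seconds instead of blocking forever. */
  static void shutdownWithTimeout(ExecutorService launcherPool) {
    launcherPool.shutdown();                        // stop accepting new tasks
    try {
      if (!launcherPool.awaitTermination(60, TimeUnit.SECONDS)) {
        launcherPool.shutdownNow();                 // interrupt tasks still running
      }
    } catch (InterruptedException e) {
      launcherPool.shutdownNow();
      Thread.currentThread().interrupt();
    }
  }
}
{code}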

> Standby RM keeps sending start AM container request to NM
> 
>
> Key: YARN-5315
> URL: https://issues.apache.org/jira/browse/YARN-5315
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5315.01.patch
>
>
> 1. Network partitions: the RM can't connect to the NMs, and start-AM requests stay pending.
> 2. The RM becomes standby; in ApplicationMasterLauncher#serviceStop, the 
> launcherPool is shut down. The launching threads are interrupted, but start-AM 
> requests may still be left in the queue.
> 3. The network reconnects, and the standby RM sends the start-AM requests to the NMs.






[jira] [Reopened] (YARN-4948) Support node labels store in zookeeper

2016-07-06 Thread jialei weng (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jialei weng reopened YARN-4948:
---

We need to make trunk take this change.

> Support node labels store in zookeeper
> --
>
> Key: YARN-4948
> URL: https://issues.apache.org/jira/browse/YARN-4948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: jialei weng
>Assignee: jialei weng
> Attachments: YARN-4948.001.patch, YARN-4948.002.patch, 
> YARN-4948.003.patch
>
>
> Support node labels store in zookeeper






[jira] [Comment Edited] (YARN-4948) Support node labels store in zookeeper

2016-07-06 Thread jialei weng (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365648#comment-15365648
 ] 

jialei weng edited comment on YARN-4948 at 7/7/16 5:48 AM:
---

Hi [~leftnoteasy] and [~Naganarasimha], what is the progress on bringing this 
patch to trunk? I mean making trunk take this change.


was (Author: wjlei):
Hi [~leftnoteasy] and [~Naganarasimha], what is the progress on bringing this 
patch to trunk? I mean making trunk take this change.

> Support node labels store in zookeeper
> --
>
> Key: YARN-4948
> URL: https://issues.apache.org/jira/browse/YARN-4948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: jialei weng
>Assignee: jialei weng
> Attachments: YARN-4948.001.patch, YARN-4948.002.patch, 
> YARN-4948.003.patch
>
>
> Support node labels store in zookeeper






[jira] [Comment Edited] (YARN-4948) Support node labels store in zookeeper

2016-07-06 Thread jialei weng (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365648#comment-15365648
 ] 

jialei weng edited comment on YARN-4948 at 7/7/16 5:48 AM:
---

Hi [~leftnoteasy] and [~Naganarasimha], what is the progress on bringing this 
patch to trunk? I mean making trunk take this change.


was (Author: wjlei):
Hi [~leftnoteasy], what is the progress on bringing this patch to trunk? I mean 
making trunk take this change.

> Support node labels store in zookeeper
> --
>
> Key: YARN-4948
> URL: https://issues.apache.org/jira/browse/YARN-4948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: jialei weng
>Assignee: jialei weng
> Attachments: YARN-4948.001.patch, YARN-4948.002.patch, 
> YARN-4948.003.patch
>
>
> Support node labels store in zookeeper






[jira] [Commented] (YARN-4948) Support node labels store in zookeeper

2016-07-06 Thread jialei weng (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365648#comment-15365648
 ] 

jialei weng commented on YARN-4948:
---

Hi [~leftnoteasy], what is the progress on bringing this patch to trunk? I mean 
making trunk take this change.

> Support node labels store in zookeeper
> --
>
> Key: YARN-4948
> URL: https://issues.apache.org/jira/browse/YARN-4948
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: jialei weng
>Assignee: jialei weng
> Attachments: YARN-4948.001.patch, YARN-4948.002.patch, 
> YARN-4948.003.patch
>
>
> Support node labels store in zookeeper






[jira] [Commented] (YARN-5320) [YARN-3368] Add resource usage by applications and queues to cluster overview page.

2016-07-06 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365638#comment-15365638
 ] 

Weiwei Yang commented on YARN-5320:
---

Hello [~leftnoteasy], do you think it would be good if we could have a resource 
usage overview per user/group?

> [YARN-3368] Add resource usage by applications and queues to cluster overview 
> page.
> ---
>
> Key: YARN-5320
> URL: https://issues.apache.org/jira/browse/YARN-5320
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> With this, we can understand which application / queue is 
> consuming the most resources in the cluster.






[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365553#comment-15365553
 ] 

Hadoop QA commented on YARN-5200:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 12 
new + 91 unchanged - 13 fixed = 103 total (was 104) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 1s {color} | 
{color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.cli.TestLogsCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816551/YARN-5200.8.patch |
| JIRA Issue | YARN-5200 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cd6c922421a9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a3f93be |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12213/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/12213/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12213/artifact/patchprocess/patch-unit-h

[jira] [Updated] (YARN-5200) Improve yarn logs to get Container List

2016-07-06 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5200:

Attachment: YARN-5200.8.patch

Rebased the patch on the latest trunk.

> Improve yarn logs to get Container List
> ---
>
> Key: YARN-5200
> URL: https://issues.apache.org/jira/browse/YARN-5200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch, 
> YARN-5200.4.patch, YARN-5200.5.patch, YARN-5200.6.patch, YARN-5200.7.patch, 
> YARN-5200.8.patch
>
>







[jira] [Commented] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-06 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365526#comment-15365526
 ] 

Weiwei Yang commented on YARN-5309:
---

Hello Thomas,

I think you need MAPREDUCE-6618 as well to fix this issue completely for Hive. 
Please let me know how it works. Thanks a lot.

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.<init>(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.<init>(YarnConfiguration) line: 96  
>   YARNRunner.<init>(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.<init>(InetSocketAddress, Configuration) line: 82   
>   Cluster.<init>(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.<init>(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).
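For illustration, a sketch of the HADOOP-11368-style teardown this points at, assuming TimelineClientImpl kept a handle to the SSLFactory it creates (field and method shown in isolation; not the actual patch):

{code:title=SslFactoryTeardownSketch.java}
import org.apache.hadoop.security.ssl.SSLFactory;

public class SslFactoryTeardownSketch {
  // Assumed new field holding the factory created in newSslConnConfigurator(...).
  private SSLFactory sslFactory;

  // Would be called from serviceStop() so the reloader thread dies with the client.
  void destroySslFactory() {
    if (sslFactory != null) {
      sslFactory.destroy();   // stops the "Truststore reloader thread"
      sslFactory = null;
    }
  }
}
{code}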






[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-07-06 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365512#comment-15365512
 ] 

Inigo Goiri commented on YARN-5215:
---

[~kasha], for the node utilization, in Windows it's pretty fast and I haven't 
found any issues.
My guess is that in Linux it should be pretty fast too as it's checking single 
values in /proc and not going through the whole tree.

Regarding disk and network, the node monitoring is already in trunk for both 
Windows and Linux.
The utilization isn't sent to the RM yet, but I have that pending in YARN-2965. I 
can push for that this week.

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently, YARN runs containers on the servers assuming that they own all the 
> resources. The proposal is to use the utilization information from the node and 
> the containers to estimate how much is consumed by external processes and to 
> schedule based on this estimation.
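A back-of-the-envelope sketch of the estimation described above, using illustrative names and a single resource dimension (memory); this is not the patch's actual code:

{code:title=ExternalLoadSketch.java}
final class ExternalLoadSketch {
  /**
   * Estimate the memory still schedulable on a node when external (non-YARN)
   * load is taken into account. All arguments are in the same unit, e.g. MB.
   */
  static long schedulableMemory(long nodeCapacity, long nodeUtilization,
      long containersUtilization, long containersAllocated) {
    // Whatever the node uses beyond what YARN containers use is external load.
    long externalLoad = Math.max(0, nodeUtilization - containersUtilization);
    // Leave room for what YARN already promised plus the external processes.
    return Math.max(0, nodeCapacity - containersAllocated - externalLoad);
  }
}
{code}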






[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-07-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365506#comment-15365506
 ] 

Karthik Kambatla commented on YARN-5215:


YARN-1011 and I assume YARN-5202 primarily target using those resources that 
have been allocated to other containers but not used. I see the value in 
extending this to all unused resources on the node, especially if we can 
release resources immediately in case of resource contention.

My concern is with aggressively scheduling non-YARN resources *without* 
immediate preemption in case of resource contention. It might also be nice to 
have a way for other (white-listed) processes to actively reclaim resources 
from YARN. Maybe the preemption code could be shared between this and 
YARN-1011?

[~elgoiri] - do you know how long it takes to compute node utilization and whether 
there is a need to improve that too?

If we look only at CPU and memory utilization, maybe we could oversubscribe on 
disk/network. Any chance we could get the node-level utilization for 
disk/network from the Tetris work? [~asuresh], [~srikanthkandula]?

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently, YARN runs containers on the servers assuming that they own all the 
> resources. The proposal is to use the utilization information from the node and 
> the containers to estimate how much is consumed by external processes and to 
> schedule based on this estimation.






[jira] [Updated] (YARN-5327) API changes required to support recurring reservations in the YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5327:
-
Description: YARN-5326 proposes adding native support for recurring 
reservations in the YARN ReservationSystem. This JIRA is a sub-task to track 
the changes needed in ApplicationClientProtocol to accomplish it. Please refer 
to the design doc in the parent JIRA for details.  (was: YARN-5326 proposes 
adding native support for recurring reservations in the YARN ReservationSystem. 
This JIRA is a sub-task to track the changes in ApplicationClientProtocol to 
accomplish it. Please refer to the design doc in the parent JIRA for details.)

> API changes required to support recurring reservations in the YARN 
> ReservationSystem
> 
>
> Key: YARN-5327
> URL: https://issues.apache.org/jira/browse/YARN-5327
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes needed 
> in ApplicationClientProtocol to accomplish it. Please refer to the design doc 
> in the parent JIRA for details.






[jira] [Created] (YARN-5331) Extend RLESparseResourceAllocation with period for supporting recurring reservations in YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5331:


 Summary: Extend RLESparseResourceAllocation with period for 
supporting recurring reservations in YARN ReservationSystem
 Key: YARN-5331
 URL: https://issues.apache.org/jira/browse/YARN-5331
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Subru Krishnan
Assignee: Sangeetha Abdu Jyothi


YARN-5326 proposes adding native support for recurring reservations in the YARN 
ReservationSystem. This JIRA is a sub-task to add a 
PeriodicRLESparseResourceAllocation. Please refer to the design doc in the 
parent JIRA for details.
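A minimal sketch of the periodic idea, storing one period of capacities and folding absolute times into it (illustrative names only; see the design doc in the parent JIRA for the actual proposal):

{code:title=PeriodicAllocationSketch.java}
import java.util.TreeMap;

final class PeriodicAllocationSketch {
  private final TreeMap<Long, Long> capacityByOffset = new TreeMap<>();
  private final long periodMillis;

  PeriodicAllocationSketch(long periodMillis) {
    this.periodMillis = periodMillis;
  }

  /** Record the capacity that starts at the given offset within the period. */
  void set(long offsetMillis, long capacity) {
    capacityByOffset.put(offsetMillis, capacity);
  }

  /** Capacity at any absolute time, derived from the single stored period. */
  long capacityAt(long absoluteTimeMillis) {
    Long key = capacityByOffset.floorKey(absoluteTimeMillis % periodMillis);
    return key == null ? 0L : capacityByOffset.get(key);
  }
}
{code}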






[jira] [Created] (YARN-5330) SharingPolicy enhancements required to support recurring reservations in the YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5330:


 Summary: SharingPolicy enhancements required to support recurring 
reservations in the YARN ReservationSystem
 Key: YARN-5330
 URL: https://issues.apache.org/jira/browse/YARN-5330
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Subru Krishnan
Assignee: Sangeetha Abdu Jyothi


YARN-5326 proposes adding native support for recurring reservations in the YARN 
ReservationSystem. This JIRA is a sub-task to track the changes required in 
SharingPolicy to accomplish it. Please refer to the design doc in the parent 
JIRA for details.






[jira] [Created] (YARN-5329) ReservationAgent enhancements required to support recurring reservations in the YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5329:


 Summary: ReservationAgent enhancements required to support 
recurring reservations in the YARN ReservationSystem
 Key: YARN-5329
 URL: https://issues.apache.org/jira/browse/YARN-5329
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Subru Krishnan
Assignee: Sangeetha Abdu Jyothi


YARN-5326 proposes adding native support for recurring reservations in the YARN 
ReservationSystem. This JIRA is a sub-task to track the changes required in 
ReservationAgent to accomplish it. Please refer to the design doc in the parent 
JIRA for details.






[jira] [Created] (YARN-5328) InMemoryPlan enhancements required to support recurring reservations in the YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5328:


 Summary: InMemoryPlan enhancements required to support recurring 
reservations in the YARN ReservationSystem
 Key: YARN-5328
 URL: https://issues.apache.org/jira/browse/YARN-5328
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Subru Krishnan
Assignee: Sangeetha Abdu Jyothi


YARN-5326 proposes adding native support for recurring reservations in the YARN 
ReservationSystem. This JIRA is a sub-task to track the changes required in 
InMemoryPlan to accomplish it. Please refer to the design doc in the parent 
JIRA for details.






[jira] [Created] (YARN-5327) API changes required to support recurring reservations in the YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5327:


 Summary: API changes required to support recurring reservations in 
the YARN ReservationSystem
 Key: YARN-5327
 URL: https://issues.apache.org/jira/browse/YARN-5327
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Subru Krishnan
Assignee: Sangeetha Abdu Jyothi


YARN-5326 proposes adding native support for recurring reservations in the YARN 
ReservationSystem. This JIRA is a sub-task to track the changes in 
ApplicationClientProtocol to accomplish it. Please refer to the design doc in 
the parent JIRA for details.






[jira] [Updated] (YARN-5326) Add support for recurring reservations in the YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5326:
-
Description: YARN-1051 introduced a ReservationSystem that enables the YARN 
RM to handle time explicitly, i.e. users can now "reserve" capacity ahead of 
time which is predictably allocated to them. Most SLA jobs/workflows are 
recurring so they need the same resources periodically. With the current 
implementation, users will have to make individual reservations for each run. 
This is an umbrella JIRA to enhance the reservation system by adding native 
support for recurring reservations.  (was: YARN-1051 introduced a 
ReservationSystem that enables the YARN RM to handle time explicitly, i.e. users 
can now "reserve" capacity ahead of time which is predictably allocated to 
them. Most SLA jobs are recurring so they need the same resources periodically. 
With the current implementation, users will have to make individual 
reservations for each run. This is an umbrella JIRA to enhance the reservation 
system by adding native support for recurring reservations.)

> Add support for recurring reservations in the YARN ReservationSystem
> 
>
> Key: YARN-5326
> URL: https://issues.apache.org/jira/browse/YARN-5326
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Subru Krishnan
> Attachments: SupportRecurringReservationsInRayon.pdf
>
>
> YARN-1051 introduced a ReservationSystem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time which is 
> predictably allocated to them. Most SLA jobs/workflows are recurring so they 
> need the same resources periodically. With the current implementation, users 
> will have to make individual reservations for each run. This is an umbrella 
> JIRA to enhance the reservation system by adding native support for recurring 
> reservations.






[jira] [Updated] (YARN-5326) Add support for recurring reservations in the YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5326:
-
Attachment: SupportRecurringReservationsInRayon.pdf

Attaching a design doc with our proposal to enhance the ReservationSystem with 
native support for recurring jobs/workflows.

> Add support for recurring reservations in the YARN ReservationSystem
> 
>
> Key: YARN-5326
> URL: https://issues.apache.org/jira/browse/YARN-5326
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Subru Krishnan
> Attachments: SupportRecurringReservationsInRayon.pdf
>
>
> YARN-1051 introduced a ReservationSystem that enables the YARN RM to handle 
> time explicitly, i.e. users can now "reserve" capacity ahead of time which is 
> predictably allocated to them. Most SLA jobs are recurring so they need the 
> same resources periodically. With the current implementation, users will have 
> to make individual reservations for each run. This is an umbrella JIRA to 
> enhance the reservation system by adding native support for recurring 
> reservations.






[jira] [Created] (YARN-5326) Add support for recurring reservations in the YARN ReservationSystem

2016-07-06 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5326:


 Summary: Add support for recurring reservations in the YARN 
ReservationSystem
 Key: YARN-5326
 URL: https://issues.apache.org/jira/browse/YARN-5326
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager
Reporter: Subru Krishnan


YARN-1051 introduced a ReservationSystem that enables the YARN RM to handle time 
explicitly, i.e. users can now "reserve" capacity ahead of time which is 
predictably allocated to them. Most SLA jobs are recurring so they need the 
same resources periodically. With the current implementation, users will have 
to make individual reservations for each run. This is an umbrella JIRA to 
enhance the reservation system by adding native support for recurring 
reservations.






[jira] [Commented] (YARN-5302) Yarn Application log Aggregation fails due to NM can not get correct HDFS delegation token II

2016-07-06 Thread Xianyin Xin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365461#comment-15365461
 ] 

Xianyin Xin commented on YARN-5302:
---

Thanks [~varun_saxena]. Will upload a new patch as soon as possible.

> Yarn Application log Aggregation fails due to NM can not get correct HDFS 
> delegation token II
> --
>
> Key: YARN-5302
> URL: https://issues.apache.org/jira/browse/YARN-5302
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Attachments: YARN-5032.001.patch, YARN-5032.002.patch, 
> YARN-5302.003.patch, YARN-5302.004.patch
>
>
> Different from YARN-5098, this happens on the NM side. When the NM recovers, 
> credentials are read from the NMStateStore. When initializing app aggregators, an 
> exception occurs because of the expired tokens. The app is a long-running 
> service.
> {code:title=LogAggregationService.java}
>   protected void initAppAggregator(final ApplicationId appId, String user,
>   Credentials credentials, ContainerLogsRetentionPolicy 
> logRetentionPolicy,
>   Map<ApplicationAccessType, String> appAcls,
>   LogAggregationContext logAggregationContext) {
> // Get user's FileSystem credentials
> final UserGroupInformation userUgi =
> UserGroupInformation.createRemoteUser(user);
> if (credentials != null) {
>   userUgi.addCredentials(credentials);
> }
>...
> try {
>   // Create the app dir
>   createAppDir(user, appId, userUgi);
> } catch (Exception e) {
>   appLogAggregator.disableLogAggregation();
>   if (!(e instanceof YarnRuntimeException)) {
> appDirException = new YarnRuntimeException(e);
>   } else {
> appDirException = (YarnRuntimeException)e;
>   }
>   appLogAggregators.remove(appId);
>   closeFileSystems(userUgi);
>   throw appDirException;
> }
> {code}






[jira] [Commented] (YARN-5309) SSLFactory truststore reloader thread leak in TimelineClientImpl

2016-07-06 Thread Thomas Friedrich (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365451#comment-15365451
 ] 

Thomas Friedrich commented on YARN-5309:


Hi [~cheersyang], Hive still depends on an older version of Hadoop that doesn't 
have a JobClient with the AutoCloseable interface. In addition, I found that 
without MAPREDUCE-6618, calling close on the JobClient won't do anything either 
(and MAPREDUCE-6618 is only part of Hadoop 2.6.4 and 2.7.3). And when 
debugging, I found that Hive didn't call close on the JobClient to begin with. 
I will test a newer Hadoop with your patch and additional changes in Hive to 
confirm that your patch works for Hive, and then open another Hive JIRA linking to 
this one for the Hive changes.
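For reference, a sketch of what the Hive-side change could look like once it runs against a Hadoop version where JobClient is AutoCloseable and MAPREDUCE-6618 makes close() effective (illustrative, not the actual Hive patch):

{code:title=CloseJobClientSketch.java}
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

final class CloseJobClientSketch {
  static void runJob(JobConf conf) throws Exception {
    // try-with-resources guarantees close(), which (with MAPREDUCE-6618 in place)
    // tears down the underlying clients and their truststore reloader threads.
    try (JobClient jobClient = new JobClient(conf)) {
      // ... submit and monitor the job via jobClient ...
    }
  }
}
{code}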

> SSLFactory truststore reloader thread leak in TimelineClientImpl
> 
>
> Key: YARN-5309
> URL: https://issues.apache.org/jira/browse/YARN-5309
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver, yarn
>Affects Versions: 2.7.1
>Reporter: Thomas Friedrich
>Assignee: Weiwei Yang
> Attachments: YARN-5309.001.patch, YARN-5309.002.patch
>
>
> We found a similar issue as HADOOP-11368 in TimelineClientImpl. The class 
> creates an instance of SSLFactory in newSslConnConfigurator and subsequently 
> creates the ReloadingX509TrustManager instance which in turn starts a trust 
> store reloader thread. 
> However, the SSLFactory is never destroyed and hence the trust store reloader 
> threads are not killed.
> This problem was observed by a customer who had SSL enabled in Hadoop and 
> submitted many queries against the HiveServer2. After a few days, the HS2 
> instance crashed and from the Java dump we could see many (over 13000) 
> threads like this:
> "Truststore reloader thread" #126 daemon prio=5 os_prio=0 
> tid=0x7f680d2e3000 nid=0x98fd waiting on 
> condition [0x7f67e482c000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run
> (ReloadingX509TrustManager.java:225)
> at java.lang.Thread.run(Thread.java:745)
> HiveServer2 uses the JobClient to submit a job:
> Thread [HiveServer2-Background-Pool: Thread-188] (Suspended (breakpoint at 
> line 89 in 
> ReloadingX509TrustManager))   
>   owns: Object  (id=464)  
>   owns: Object  (id=465)  
>   owns: Object  (id=466)  
>   owns: ServiceLoader  (id=210)
>   ReloadingX509TrustManager.<init>(String, String, String, long) line: 89 
>   FileBasedKeyStoresFactory.init(SSLFactory$Mode) line: 209   
>   SSLFactory.init() line: 131 
>   TimelineClientImpl.newSslConnConfigurator(int, Configuration) line: 532 
>   TimelineClientImpl.newConnConfigurator(Configuration) line: 507 
>   TimelineClientImpl.serviceInit(Configuration) line: 269 
>   TimelineClientImpl(AbstractService).init(Configuration) line: 163   
>   YarnClientImpl.serviceInit(Configuration) line: 169 
>   YarnClientImpl(AbstractService).init(Configuration) line: 163   
>   ResourceMgrDelegate.serviceInit(Configuration) line: 102
>   ResourceMgrDelegate(AbstractService).init(Configuration) line: 163  
>   ResourceMgrDelegate.<init>(YarnConfiguration) line: 96  
>   YARNRunner.<init>(Configuration) line: 112  
>   YarnClientProtocolProvider.create(Configuration) line: 34   
>   Cluster.initialize(InetSocketAddress, Configuration) line: 95   
>   Cluster.<init>(InetSocketAddress, Configuration) line: 82   
>   Cluster.<init>(Configuration) line: 75  
>   JobClient.init(JobConf) line: 475   
>   JobClient.<init>(JobConf) line: 454 
>   MapRedTask(ExecDriver).execute(DriverContext) line: 401 
>   MapRedTask.execute(DriverContext) line: 137 
>   MapRedTask(Task).executeTask() line: 160 
>   TaskRunner.runSequential() line: 88 
>   Driver.launchTask(Task, String, boolean, String, int, 
> DriverContext) line: 1653   
>   Driver.execute() line: 1412 
> For every job, a new instance of JobClient/YarnClientImpl/TimelineClientImpl 
> is created. But because the HS2 process stays up for days, the previous trust 
> store reloader threads are still hanging around in the HS2 process and 
> eventually use all the resources available. 
> It seems like a similar fix as HADOOP-11368 is needed in TimelineClientImpl 
> but it doesn't have a destroy method to begin with. 
> One option to avoid this problem is to disable the yarn timeline service 
> (yarn.timeline-service.enabled=false).




[jira] [Commented] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365408#comment-15365408
 ] 

Hadoop QA commented on YARN-5270:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 37s 
{color} | {color:red} root: The patch generated 14 new + 1249 unchanged - 18 
fixed = 1263 total (was 1267) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 9s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 55s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 11s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 13s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 43s 
{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 43s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 113m 46s 
{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 31s {color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 

[jira] [Commented] (YARN-5233) Support for specifying a path for ATS plugin jars

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365373#comment-15365373
 ] 

Hudson commented on YARN-5233:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10057/])
YARN-5233. Support for specifying a path for ATS plugin jars. (jianhe: rev 
8a9d293dd60f6d51e1574e412d40746ba8175fe1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityGroupFSTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestEntityGroupFSTimelineStore.java


> Support for specifying a path for ATS plugin jars
> -
>
> Key: YARN-5233
> URL: https://issues.apache.org/jira/browse/YARN-5233
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5233-trunk.001.patch, YARN-5233-trunk.002.patch, 
> YARN-5233-trunk.003.patch
>
>
> Third-party plugins need to add their jars to ATS. Most of the time, 
> isolation is not needed. However, there needs to be a way to specify the 
> path. For now, the jars on that path can be added to the default classloader.
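As a generic illustration of adding the jars on such a path to a classloader (plain JDK APIs; the actual configuration property and wiring in the patch may differ):

{code:title=PluginClasspathSketch.java}
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

final class PluginClasspathSketch {
  /** Build a classloader that can see every jar under the configured plugin path. */
  static ClassLoader loaderFor(String pluginDir, ClassLoader parent) throws Exception {
    List<URL> urls = new ArrayList<>();
    File[] jars = new File(pluginDir).listFiles((dir, name) -> name.endsWith(".jar"));
    if (jars != null) {
      for (File jar : jars) {
        urls.add(jar.toURI().toURL());
      }
    }
    return new URLClassLoader(urls.toArray(new URL[0]), parent);
  }
}
{code}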






[jira] [Updated] (YARN-5233) Support for specifying a path for ATS plugin jars

2016-07-06 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5233:
--
Affects Version/s: (was: 2.8.0)
   2.9.0
 Target Version/s: 2.9.0

Committed to trunk and branch-2. Thanks, Li!

> Support for specifying a path for ATS plugin jars
> -
>
> Key: YARN-5233
> URL: https://issues.apache.org/jira/browse/YARN-5233
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.9.0
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5233-trunk.001.patch, YARN-5233-trunk.002.patch, 
> YARN-5233-trunk.003.patch
>
>
> Third-party plugins need to add their jars to ATS. Most of the time, 
> isolation is not needed. However, there needs to be a way to specify the 
> path. For now, the jars on that path can be added to the default classloader.






[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application

2016-07-06 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365349#comment-15365349
 ] 

Vinod Kumar Vavilapalli commented on YARN-5224:
---

Note that for the "yarn logs" CLI to work with finished containers of a running 
application, it needs the container-to-node mapping, which is only available 
if generic history / AHS is enabled and also has the per-container information.

> Logs for a completed container are not available in the yarn logs output for 
> a live application
> ---
>
> Key: YARN-5224
> URL: https://issues.apache.org/jira/browse/YARN-5224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, 
> YARN-5224.4.patch, YARN-5224.5.patch, YARN-5224.6.patch, 
> YARN-5224.6.trunk.patch, YARN-5224.7.trunk.patch, YARN-5224.8.branch-2.patch, 
> YARN-5224.8.trunk.patch
>
>
> This affects 'short' jobs like MapReduce and Tez more than long running apps.
> Related: YARN-5193 (but that only covers long running apps)






[jira] [Updated] (YARN-5325) Stateless AMRMProxy policies implementation

2016-07-06 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5325:
---
Description: This JIRA tracks policies in the AMRMProxy that decide how to 
forward ResourceRequests, without maintaining substantial state across 
decisions (e.g., broadcast).

> Stateless AMRMProxy policies implementation
> 
>
> Key: YARN-5325
> URL: https://issues.apache.org/jira/browse/YARN-5325
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>
> This JIRA tracks policies in the AMRMProxy that decide how to forward 
> ResourceRequests, without maintaining substantial state across decisions 
> (e.g., broadcast).






[jira] [Updated] (YARN-5324) Stateless router policies implementation

2016-07-06 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5324:
---
Description: These are policies at the Router that do not require maintaining 
state across choices (e.g., weighted random).
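For illustration, a minimal weighted-random pick of the kind mentioned above (names are made up; the real policy code is part of the federation patches):

{code:title=WeightedRandomSketch.java}
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

final class WeightedRandomSketch {
  /** Pick a sub-cluster id with probability proportional to its weight. */
  static String pick(Map<String, Double> weights) {
    double total = 0;
    for (double w : weights.values()) {
      total += w;
    }
    if (total <= 0) {
      // Degenerate weights: fall back to any sub-cluster.
      return weights.keySet().iterator().next();
    }
    double r = ThreadLocalRandom.current().nextDouble(total);
    double cumulative = 0;
    for (Map.Entry<String, Double> e : weights.entrySet()) {
      cumulative += e.getValue();
      if (r < cumulative) {
        return e.getKey();
      }
    }
    return weights.keySet().iterator().next();  // floating-point edge case
  }
}
{code}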

> Stateless router policies implementation
> 
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).






[jira] [Updated] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-07-06 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5323:
---
Description: This JIRA tracks APIs for the policies that will guide the 
Router and AMRMProxy decisions on where to forward job submission/query 
requests as well as ResourceRequests.

> Policies APIs (for Router and AMRMProxy policies)
> -
>
> Key: YARN-5323
> URL: https://issues.apache.org/jira/browse/YARN-5323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>
> This JIRA tracks APIs for the policies that will guide the Router and 
> AMRMProxy decisions on where to forward job submission/query requests as 
> well as ResourceRequests.






[jira] [Created] (YARN-5325) Stateless AMRMProxy policies implementation

2016-07-06 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-5325:
--

 Summary: Stateless AMRMProxy policies implementation
 Key: YARN-5325
 URL: https://issues.apache.org/jira/browse/YARN-5325
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino
Assignee: Carlo Curino









[jira] [Commented] (YARN-5294) Pass remote ip address down to YarnAuthorizationProvider

2016-07-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365343#comment-15365343
 ] 

Wangda Tan commented on YARN-5294:
--

Failed tests are tracked by: YARN-5318/YARN-5037/YARN-5317.

Committing the patch to branch-2.

> Pass remote ip address down to YarnAuthorizationProvider
> 
>
> Key: YARN-5294
> URL: https://issues.apache.org/jira/browse/YARN-5294
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5294-branch-2.patch, YARN-5294.1.patch, 
> YARN-5294.2.patch, YARN-5294.3.patch, YARN-5294.4.patch, YARN-5294.5.patch, 
> YARN-5294.6.patch
>
>
> Pass down the remote ip address down to authorizer. Underlying authorizer 
> implementation can make use of this information for its own authorization 
> rule.






[jira] [Created] (YARN-5324) Stateless router policies implementation

2016-07-06 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-5324:
--

 Summary: Stateless router policies implementation
 Key: YARN-5324
 URL: https://issues.apache.org/jira/browse/YARN-5324
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino
Assignee: Carlo Curino









[jira] [Created] (YARN-5323) Policies APIs (for Router and AMRMProxy policies)

2016-07-06 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-5323:
--

 Summary: Policies APIs (for Router and AMRMProxy policies)
 Key: YARN-5323
 URL: https://issues.apache.org/jira/browse/YARN-5323
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino
Assignee: Carlo Curino









[jira] [Updated] (YARN-5322) [YARN-3368] Add a node heat chart map

2016-07-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5322:
-
Attachment: sample-1.png

> [YARN-3368] Add a node heat chart map
> -
>
> Key: YARN-5322
> URL: https://issues.apache.org/jira/browse/YARN-5322
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: sample-1.png
>
>
> With this, we can more easily figure out hotspots in the cluster.






[jira] [Created] (YARN-5322) [YARN-3368] Add a node heat chart map

2016-07-06 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5322:


 Summary: [YARN-3368] Add a node heat chart map
 Key: YARN-5322
 URL: https://issues.apache.org/jira/browse/YARN-5322
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


With this, we can more easily figure out hotspots in the cluster.






[jira] [Updated] (YARN-5320) [YARN-3368] Add resource usage by applications and queues to cluster overview page.

2016-07-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5320:
-
Summary: [YARN-3368] Add resource usage by applications and queues to 
cluster overview page.  (was: Add resource usage by applications and queues to 
cluster overview page.)

> [YARN-3368] Add resource usage by applications and queues to cluster overview 
> page.
> ---
>
> Key: YARN-5320
> URL: https://issues.apache.org/jira/browse/YARN-5320
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> With this, we can understand which application / queue is consuming the most 
> resources in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5321) [YARN-3368] Add resource usage for application by node managers

2016-07-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5321:
-
Summary: [YARN-3368] Add resource usage for application by node managers  
(was: Add resource usage for application by node managers)

> [YARN-3368] Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: sample-1.png
>
>
> With this, the user can understand the distribution of resources allocated to 
> this application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5321) Add resource usage for application by node managers

2016-07-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365337#comment-15365337
 ] 

Wangda Tan commented on YARN-5321:
--

Attached sample screenshot.

> Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: sample-1.png
>
>
> With this, the user can understand the distribution of resources allocated to 
> this application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5321) Add resource usage for application by node managers

2016-07-06 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5321:


 Summary: Add resource usage for application by node managers
 Key: YARN-5321
 URL: https://issues.apache.org/jira/browse/YARN-5321
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan
 Attachments: sample-1.png

With this, the user can understand the distribution of resources allocated to 
this application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5321) Add resource usage for application by node managers

2016-07-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5321:
-
Attachment: sample-1.png

> Add resource usage for application by node managers
> ---
>
> Key: YARN-5321
> URL: https://issues.apache.org/jira/browse/YARN-5321
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: sample-1.png
>
>
> With this, the user can understand the distribution of resources allocated to 
> this application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5320) Add resource usage by applications and queues to cluster overview page.

2016-07-06 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5320:


 Summary: Add resource usage by applications and queues to cluster 
overview page.
 Key: YARN-5320
 URL: https://issues.apache.org/jira/browse/YARN-5320
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


With this, we can understand which application / queue is consuming the most 
resources in the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5037) TestRMRestart#testQueueMetricsOnRMRestart random failure

2016-07-06 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee reassigned YARN-5037:
--

Assignee: sandflee

> TestRMRestart#testQueueMetricsOnRMRestart random failure
> ---
>
> Key: YARN-5037
> URL: https://issues.apache.org/jira/browse/YARN-5037
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: sandflee
>Assignee: sandflee
>
> https://builds.apache.org/job/PreCommit-YARN-Build/11326/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.7.0_95.txt
> {noformat}
> Tests run: 28, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 67.159 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart
> testQueueMetricsOnRMRestart(org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart)
>   Time elapsed: 2.561 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.assertQueueMetrics(TestRMRestart.java:1874)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testQueueMetricsOnRMRestart(TestRMRestart.java:1849)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4939) the decommissioning Node should keep alive if NM restart

2016-07-06 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365317#comment-15365317
 ] 

sandflee edited comment on YARN-4939 at 7/6/16 11:16 PM:
-

Thanks [~djp], the test failure seems unrelated; it passes when run locally. Filed 
YARN-5317 and YARN-5318 to track.


was (Author: sandflee):
Thanks [~djp], the test seems unrelated; it passes when run locally. Filed 
YARN-5317 and YARN-5318 to track.

> the decommissioning Node should keep alive  if NM restart
> -
>
> Key: YARN-4939
> URL: https://issues.apache.org/jira/browse/YARN-4939
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-4939.01.patch, YARN-4939.02.patch, 
> YARN-4939.03.patch, YARN-4939.04.patch, YARN-4939.05.patch
>
>
> 1, gracefully decommission a node A
> 2, restart node A
> 3, node A could not register to RM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4939) the decommissioning Node should keep alive if NM restart

2016-07-06 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365317#comment-15365317
 ] 

sandflee commented on YARN-4939:


Thanks [~djp], the test seems unrelated; it passes when run locally. Filed 
YARN-5317 and YARN-5318 to track.

> the decommissioning Node should keep alive  if NM restart
> -
>
> Key: YARN-4939
> URL: https://issues.apache.org/jira/browse/YARN-4939
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-4939.01.patch, YARN-4939.02.patch, 
> YARN-4939.03.patch, YARN-4939.04.patch, YARN-4939.05.patch
>
>
> 1, gracefully decommission a node A
> 2, restart node A
> 3, node A could not register to RM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5319) testRefreshNodesResourceWithFileSystemBasedConfigurationProvider may fail

2016-07-06 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee resolved YARN-5319.

Resolution: Duplicate

> testRefreshNodesResourceWithFileSystemBasedConfigurationProvider may fail
> -
>
> Key: YARN-5319
> URL: https://issues.apache.org/jira/browse/YARN-5319
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: sandflee
>Priority: Minor
>
> org.junit.ComparisonFailure: expected:<> but 
> was:<>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRefreshNodesResourceWithFileSystemBasedConfigurationProvider(TestRMAdminService.java:238)
> https://builds.apache.org/job/PreCommit-YARN-Build/12204/testReport/org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager/TestAMRestart/testAMRestartNotLostContainerCompleteMsg/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5319) testRefreshNodesResourceWithFileSystemBasedConfigurationProvider may fail

2016-07-06 Thread sandflee (JIRA)
sandflee created YARN-5319:
--

 Summary: 
testRefreshNodesResourceWithFileSystemBasedConfigurationProvider may fail
 Key: YARN-5319
 URL: https://issues.apache.org/jira/browse/YARN-5319
 Project: Hadoop YARN
  Issue Type: Test
Reporter: sandflee
Priority: Minor


org.junit.ComparisonFailure: expected:<> but 
was:<>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRefreshNodesResourceWithFileSystemBasedConfigurationProvider(TestRMAdminService.java:238)

https://builds.apache.org/job/PreCommit-YARN-Build/12204/testReport/org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager/TestAMRestart/testAMRestartNotLostContainerCompleteMsg/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5318) testRefreshNodesResourceWithFileSystemBasedConfigurationProvider may fail

2016-07-06 Thread sandflee (JIRA)
sandflee created YARN-5318:
--

 Summary: 
testRefreshNodesResourceWithFileSystemBasedConfigurationProvider may fail
 Key: YARN-5318
 URL: https://issues.apache.org/jira/browse/YARN-5318
 Project: Hadoop YARN
  Issue Type: Test
Reporter: sandflee
Priority: Minor


org.junit.ComparisonFailure: expected:<> but 
was:<>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRefreshNodesResourceWithFileSystemBasedConfigurationProvider(TestRMAdminService.java:238)

https://builds.apache.org/job/PreCommit-YARN-Build/12204/testReport/org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager/TestAMRestart/testAMRestartNotLostContainerCompleteMsg/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5317) testAMRestartNotLostContainerCompleteMsg may fail

2016-07-06 Thread sandflee (JIRA)
sandflee created YARN-5317:
--

 Summary: testAMRestartNotLostContainerCompleteMsg may fail
 Key: YARN-5317
 URL: https://issues.apache.org/jira/browse/YARN-5317
 Project: Hadoop YARN
  Issue Type: Test
Reporter: sandflee
Assignee: sandflee
Priority: Minor


java.lang.Exception: test timed out after 3 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:261)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:225)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:207)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:746)
at 
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testAMRestartNotLostContainerCompleteMsg(TestAMRestart.java:841)

see 
https://builds.apache.org/job/PreCommit-YARN-Build/12204/testReport/org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager/TestAMRestart/testAMRestartNotLostContainerCompleteMsg/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365290#comment-15365290
 ] 

Sangjin Lee commented on YARN-5229:
---

Oh OK. Thanks for the clarification!

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5283) Refactor container assignment into AbstractYarnScheduler#assignContainers

2016-07-06 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365276#comment-15365276
 ] 

Ray Chiang commented on YARN-5283:
--

RE: Javadoc errors

I see these in trunk even without my changes, although they are warnings 
instead of errors (presumably a test-patch thing).  Looks related to new 
Javadoc warnings in JDK8.

RE: Failing unit tests

Unit tests pass in my tree.


> Refactor container assignment into AbstractYarnScheduler#assignContainers
> -
>
> Key: YARN-5283
> URL: https://issues.apache.org/jira/browse/YARN-5283
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager, 
> scheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5283.001.patch
>
>
> CapacityScheduler#allocateContainersToNode() and 
> FairScheduler#attemptScheduling() have some common code that can be 
> refactored into a common abstract method like 
> AbstractYarnScheduler#assignContainers().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365269#comment-15365269
 ] 

Li Lu commented on YARN-5229:
-

Yes. Let's put this into the "on-hold" list and commit it after merge. 

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365264#comment-15365264
 ] 

Vrushali C commented on YARN-5229:
--


This is a post-merge check-in candidate. I don't think it needs to go in 
before the merge.

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-07-06 Thread Hitesh Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365262#comment-15365262
 ] 

Hitesh Sharma commented on YARN-5216:
-

Thank you for the insights, [~kkaranasos]!

Sorry, rebalancing wasn't the right terminology to use. I was referring to the 
killing of queued containers that happens during 
{{shedQueuedOpportunisticContainers}} to enforce the queue limits, which in 
turn follows the paths you mention above.

It might be a good idea to have start container imply resume when the 
container is paused, but at the same time it overloads the meaning of 
start container, and given how different they are it can impose some challenges. 
Anyway, we can discuss this more in [YARN-5292].

{quote}
As far as I can see, all you need from the NM to support preemption is (let me 
know if there are more things that I am missing):
# Determine the way a container stops (option 1: kill, option 2: preempt).
# Determine the way it starts (that is, resume it if it's paused, instead of 
starting it from the beginning).
# Decide which container to start (you might want to start first containers 
that are paused instead of new ones).
{quote}

How do you propose to do 3 without having an extension point to pick a 
container to start? The moment we have an extension point to pick a container 
to start, we also need an extension point to pick a container to kill for 
enforcing queue limits or something else.

Appreciate the feedback and help. Thanks a lot!




> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365260#comment-15365260
 ] 

Sangjin Lee commented on YARN-5229:
---

Just to be clear, are we considering committing this *before* the merge? I'd 
assumed that we're going to resume these post-merge.

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-07-06 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365259#comment-15365259
 ] 

Inigo Goiri commented on YARN-5215:
---

In our internal deployment, we always reserve a buffer for the external load to 
spike. This is set by tuning the available cores and memory.

[~jlowe], as you mention, we internally have preemption at both the RM and NM 
levels. We only enable the one at the NM level as it's the one with the best latency 
and we don't have a need for the RM level one. As I mentioned in a previous 
comment, this patch is just to do scheduling in the RM; if we want to go with 
the full solution, we would need:
* Schedule containers considering external load in the RM
* Expose external load in the UI
* Use history to smooth external load
* Preempting containers from the RM based on external load
* Preempting containers from the NM based on external load
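
To make the estimation step above concrete, here is a rough, non-authoritative sketch (memory-only, with illustrative names; the actual patch may compute this differently):

{code:java}
/** Hedged sketch: estimate external load and shrink what the scheduler may hand out. */
public class ExternalLoadSketch {

  /** Memory (MB) believed to be used by processes outside YARN containers. */
  static long externalMemoryMB(long nodeUsedMB, long containersUsedMB) {
    return Math.max(0, nodeUsedMB - containersUsedMB);
  }

  /** Memory the scheduler should still consider allocatable, keeping a spike buffer. */
  static long schedulableMemoryMB(long nodeCapacityMB, long allocatedMB,
                                  long externalMB, long bufferMB) {
    return Math.max(0, nodeCapacityMB - allocatedMB - externalMB - bufferMB);
  }

  public static void main(String[] args) {
    // 64 GB node, 20 GB allocated to containers that actually use 12 GB,
    // the node reports 30 GB used overall, and a 4 GB buffer is kept for spikes.
    long external = externalMemoryMB(30 * 1024, 12 * 1024);                    // 18 GB
    long free = schedulableMemoryMB(64 * 1024, 20 * 1024, external, 4 * 1024); // 22 GB
    System.out.println("external=" + external + "MB schedulable=" + free + "MB");
  }
}
{code}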

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5315) Standby RM keep sending start am container request to NM

2016-07-06 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365256#comment-15365256
 ] 

sandflee commented on YARN-5315:


bq. shutdownNow() still does not wait for actively executing tasks to 
terminate. You should make an explicit awaitTermination() call to do that.
awaitTermination will block until all tasks terminate; this may delay the stop 
process of other services. Should we do that?

bq. We should also call "super.serviceStop()" to complete the life-cycle, can 
you make that change too?
will do
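
To illustrate the bounded-wait idea being discussed, a minimal sketch (the class name, pool size and the 60-second bound are assumptions for the example, not the actual ApplicationMasterLauncher code):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Hedged sketch only: shut the launcher pool down, then wait a bounded time. */
public class LauncherStopSketch {
  private final ExecutorService launcherPool = Executors.newFixedThreadPool(10);

  public void stop() throws InterruptedException {
    // Drop queued launch requests and interrupt the running launcher threads.
    launcherPool.shutdownNow();
    // Bounded wait, so stopping this service cannot block the rest of shutdown forever.
    if (!launcherPool.awaitTermination(60, TimeUnit.SECONDS)) {
      System.err.println("launcherPool did not terminate within 60 seconds");
    }
    // The real service would then call super.serviceStop() to finish the life-cycle.
  }
}
{code}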


> Standby RM keep sending start am container request to NM
> 
>
> Key: YARN-5315
> URL: https://issues.apache.org/jira/browse/YARN-5315
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5315.01.patch
>
>
> 1. network partitions: the RM couldn't connect to NMs and the start-AM request is pending
> 2. the RM becomes standby; in ApplicationMasterLauncher#serviceStop, 
> launcherPool is shut down. The launching threads are interrupted, but start-AM 
> requests may still be left in the queue
> 3. the network reconnects; the standby RM sends the start-AM request to the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365237#comment-15365237
 ] 

Li Lu commented on YARN-5229:
-

Let me kick jenkins again on this patch... 

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365239#comment-15365239
 ] 

Hadoop QA commented on YARN-5229:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 3s {color} 
| {color:red} Docker failed to build yetus/hadoop:e2f6409. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816514/YARN-5229-YARN-2928.04.patch
 |
| JIRA Issue | YARN-5229 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12212/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365231#comment-15365231
 ] 

Hadoop QA commented on YARN-5229:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 2m 33s 
{color} | {color:red} Docker failed to build yetus/hadoop:e2f6409. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816514/YARN-5229-YARN-2928.04.patch
 |
| JIRA Issue | YARN-5229 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12211/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-07-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365218#comment-15365218
 ] 

Andrew Wang commented on YARN-5270:
---

Thanks for the reply Wangda. To clarify, I'm going to cut a 3.0.0-alpha1 branch 
off of trunk, and was thinking about reverting YARN-4844 just from the release 
branch (not trunk). Seems like we're close on the patch here though anyway, 
which is great news.

Thanks for the ping about HADOOP-12893, I missed that there was an addendum 
patch pending for trunk. I'm tracking it more closely now.

> Solve miscellaneous issues caused by YARN-4844
> --
>
> Key: YARN-5270
> URL: https://issues.apache.org/jira/browse/YARN-5270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5270-branch-2.001.patch, 
> YARN-5270-branch-2.002.patch, YARN-5270-branch-2.003.patch, 
> YARN-5270-branch-2.8.001.patch, YARN-5270-branch-2.8.002.patch, 
> YARN-5270-branch-2.8.003.patch, YARN-5270.003.patch
>
>
> Such as javac warnings reported by YARN-5077 and type converting issues in 
> Resources class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365209#comment-15365209
 ] 

Li Lu commented on YARN-5229:
-

Thanks [~vrushalic]. +1 pending Jenkins. 

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5233) Support for specifying a path for ATS plugin jars

2016-07-06 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365202#comment-15365202
 ] 

Jian He commented on YARN-5233:
---

lgtm

> Support for specifying a path for ATS plugin jars
> -
>
> Key: YARN-5233
> URL: https://issues.apache.org/jira/browse/YARN-5233
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5233-trunk.001.patch, YARN-5233-trunk.002.patch, 
> YARN-5233-trunk.003.patch
>
>
> Third-party plugins need to add their jars to ATS. Most of the time, 
> isolation is not needed. However, there needs to be a way to specify the 
> path. For now, the jars on that path can be added to the default classloader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365197#comment-15365197
 ] 

Vrushali C edited comment on YARN-5229 at 7/6/16 10:03 PM:
---

Thanks for the review [~gtCarrera9]], uploading v4 that addresses the build 
report flags for license and javadoc


was (Author: vrushalic):
Thanks for the review [~gtCarrera], uploading v4 that addresses the build 
report flags for license and javadoc

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365197#comment-15365197
 ] 

Vrushali C edited comment on YARN-5229 at 7/6/16 10:04 PM:
---

Thanks for the review [~gtCarrera9], uploading v4 that addresses the build 
report flags for license and javadoc


was (Author: vrushalic):
Thanks for the review [~gtCarrera9]], uploading v4 that addresses the build 
report flags for license and javadoc

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5229:
-
Attachment: YARN-5229-YARN-2928.04.patch

Thanks for the review [~gtCarrera], uploading v4 that addresses the build 
report flags for license and javadoc

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365163#comment-15365163
 ] 

Hudson commented on YARN-5224:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10056 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10056/])
YARN-5224. Added new web-services /containers/{containerid}/logs & (vinodkv: 
rev 4c9e1aeb94247a6e97215e902bdc71a325244243)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerLogsInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServices.java


> Logs for a completed container are not available in the yarn logs output for 
> a live application
> ---
>
> Key: YARN-5224
> URL: https://issues.apache.org/jira/browse/YARN-5224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, 
> YARN-5224.4.patch, YARN-5224.5.patch, YARN-5224.6.patch, 
> YARN-5224.6.trunk.patch, YARN-5224.7.trunk.patch, YARN-5224.8.branch-2.patch, 
> YARN-5224.8.trunk.patch
>
>
> This affects 'short' jobs like MapReduce and Tez more than long running apps.
> Related: YARN-5193 (but that only covers long running apps)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5283) Refactor container assignment into AbstractYarnScheduler#assignContainers

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365158#comment-15365158
 ] 

Hadoop QA commented on YARN-5283:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
29s {color} | {color:green} root: The patch generated 0 new + 603 unchanged - 5 
fixed = 603 total (was 608) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 15s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s 
{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816491/YARN-5283.001.patch |
| JIRA Issue | YARN-5283 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4ef46def390b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 04f6ebb |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12207/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12207/artifac

[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-07-06 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365101#comment-15365101
 ] 

Konstantinos Karanasos commented on YARN-5216:
--

Thanks for the reply, [~hrsharma].

Some observations:
* The only operations that the NM needs to worry about are starting and stopping 
of containers. Rescheduling/rebalancing is not something done at the NM. The NM 
simply kills a container, the AM gets notified and resends the container 
request to the RM, which in turn reschedules it.
* The way a paused container resumes its execution should be encapsulated in 
the container lifecycle. All the ContainersManager needs to do is call 
startContainer and the container should resume if it is paused. So I think 
there is no need for a dedicated {{OpportunisticContainerManager}}.

As far as I can see, all you need from the NM to support preemption is (let me 
know if there are more things that I am missing):
# Determine the way a container stops (option 1: kill, option 2: preempt).
# Determine the way it starts (that is, resume it if it's paused, instead of 
starting it from the beginning).
# Decide which container to start (you might want to start first containers 
that are paused instead of new ones).

So essentially you only need to tune the startContainer and stopContainer 
methods.

Given the above, I can see two options:
# Subclass the {{QueuingContainersManagerImpl}} and override the startContainer 
and stopContainer related methods.
# Add a preemption policy that gets plugged in the above methods.
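
For illustration, a rough sketch of what option 2 could look like as a plug point (the interface and method names are assumptions, not an existing YARN API; the real code would use ContainerId rather than plain strings):

{code:java}
import java.util.List;

/** Hedged sketch: policy consulted when OPPORTUNISTIC containers must yield or start. */
public interface OpportunisticPreemptionPolicy {
  enum StopAction { KILL, PAUSE }
  enum StartMode  { LAUNCH, RESUME }

  /** How should this opportunistic container be stopped to make room? */
  StopAction onStop(String containerId);

  /** Fresh launch, or resume from a paused state? */
  StartMode onStart(String containerId);

  /** Which queued or paused container should be started next, if any (null if none)? */
  String pickNextToStart(List<String> candidates);
}
{code}

A kill-by-default implementation would reproduce today's behaviour, while a pause-capable one would cover the resume path discussed above.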

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5224) Logs for a completed container are not available in the yarn logs output for a live application

2016-07-06 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365091#comment-15365091
 ] 

Vinod Kumar Vavilapalli commented on YARN-5224:
---

Thanks for the update. The TestYarnClient reservation failures are not 
reproducible locally. They seem like transient issues; will file a ticket.

Latest patch looks good to me. +1, checking this in.

> Logs for a completed container are not available in the yarn logs output for 
> a live application
> ---
>
> Key: YARN-5224
> URL: https://issues.apache.org/jira/browse/YARN-5224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
> Attachments: YARN-5224.1.patch, YARN-5224.2.patch, YARN-5224.3.patch, 
> YARN-5224.4.patch, YARN-5224.5.patch, YARN-5224.6.patch, 
> YARN-5224.6.trunk.patch, YARN-5224.7.trunk.patch, YARN-5224.8.branch-2.patch, 
> YARN-5224.8.trunk.patch
>
>
> This affects 'short' jobs like MapReduce and Tez more than long running apps.
> Related: YARN-5193 (but that only covers long running apps)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-07-06 Thread Hitesh Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365085#comment-15365085
 ] 

Hitesh Sharma commented on YARN-5216:
-

[~kkaranasos], I'm not sure if subclassing would work. We need to have more 
control over how the opportunistic containers are queued and how we start/preempt 
them. From a design point of view, {{QueuingContainersPreemptionManagerImpl}} 
is not really a queuing container manager, but just a specific way to 
preempt queued opportunistic containers. Thus composition seems a better choice 
here.

Thank you.

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-07-06 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365070#comment-15365070
 ] 

Jason Lowe commented on YARN-5215:
--

Maybe I'm missing something, but any of the proposed approaches has YARN 
assuming it can leverage the unused resources on the node.  That's sort of the 
whole point, we want YARN to use those unused resources rather than just 
hard-partitioning the node between YARN and the other system.  Some of the 
approaches start with the assumption that the whole node belongs to YARN and 
YARN will scale back usage of the node based on utilization feedback, while 
other approaches start with YARN assuming it has a smaller portion of the node 
and can reach beyond it when utilization is low.  It's the same scenario from 
two perspectives.

IIUC any of these approaches can react relatively quickly to the other 
workload's demands by having the nodemanager take action directly (by 
preempting containers) when the periodically monitored node utilization goes 
above some configured limit.  The original proposal in this JIRA doesn't do 
that, which means it won't be super-responsive to the other subsystem.   The RM 
won't allocate any additional containers when the utilization gets high, but 
some of the containers would have to exit on their own before YARN's existing 
utilization would decrease.  It sounds like the version Inigo has deployed in 
production does do some sort of preemption, but it sounded like it was coming 
from the RM rather than the NM, which would mean a slightly slower response 
time than if the NM did it directly.

If the latency demands of the other workload are so severe that it's impossible 
for YARN to react quickly enough then I don't see how YARN can leverage those 
resources when they are unused.  We'd have to resort to some kind of 
hard-partitioning (either giving the nodemanager less resources than the node 
actually has or using proxy containers in YARN on behalf of the other workload 
to reserve the resources) and live with the underutilization of those resources 
when the other workload is idle.
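
To make the NM-side reaction concrete, here is a minimal sketch of the idea 
(purely illustrative; the class and method names are hypothetical and not part 
of any attached patch): a periodically sampled node utilization is compared 
against a configured limit, and OPPORTUNISTIC containers are preempted until 
usage drops below it.

{code}
// Hypothetical sketch: NM-side monitor that preempts OPPORTUNISTIC containers
// when the sampled node utilization exceeds a configured limit.
import java.util.ArrayDeque;
import java.util.Deque;

class UtilizationBasedPreemptorSketch {
  private final double cpuLimit;                     // e.g. 0.8 == 80% of the node
  private final Deque<String> opportunistic = new ArrayDeque<>();

  UtilizationBasedPreemptorSketch(double cpuLimit) {
    this.cpuLimit = cpuLimit;
  }

  void opportunisticContainerStarted(String containerId) {
    opportunistic.addLast(containerId);
  }

  // Called from the periodic node-resource monitor with the latest CPU sample.
  void onUtilizationSample(double nodeCpuUsage) {
    while (nodeCpuUsage > cpuLimit && !opportunistic.isEmpty()) {
      String victim = opportunistic.removeLast();    // most recently started first
      preempt(victim);
      nodeCpuUsage -= estimatedUsage(victim);        // rough accounting only
    }
  }

  private void preempt(String containerId) {
    System.out.println("Preempting OPPORTUNISTIC container " + containerId);
  }

  private double estimatedUsage(String containerId) {
    return 0.1;                                      // placeholder estimate
  }
}
{code}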

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently YARN runs containers on the servers assuming that it owns all the 
> resources. The proposal is to use the utilization information from the node and 
> the containers to estimate how much is consumed by external processes and to 
> schedule based on this estimate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4676) Automatic and Asynchronous Decommissioning Nodes Status Tracking

2016-07-06 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363095#comment-15363095
 ] 

Robert Kanter edited comment on YARN-4676 at 7/6/16 8:57 PM:
-

Yes, it tracks in both the RM and the client.  Looking at {{RMAdminCLI}}, it 
should track on the client side if you specify a timeout as an argument; if 
not, it won't.  e.g. {{yarn rmadmin -refreshNodes -graceful 100}} will track on 
the client side (and the RM) while {{yarn rmadmin -refreshNodes -graceful}} 
will only track in the RM (client will exit).  [~djp], which did you try?


was (Author: rkanter):
Yes, it tracks in both the RM and the client.  Looking at {{RMAdminCLI}}, it 
should track on the client side if you specify a timeout as an argument; if 
not, it won't.  e.g. {{yarn rmadmin -refreshNodes -graceful 100}} will track on 
the client side (and the RM) while {{yarn rmadmin -refreshNodes -graceful}} 
will only track in the RM (client will exit).

> Automatic and Asynchronous Decommissioning Nodes Status Tracking
> 
>
> Key: YARN-4676
> URL: https://issues.apache.org/jira/browse/YARN-4676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Zhi
>Assignee: Daniel Zhi
>  Labels: features
> Attachments: GracefulDecommissionYarnNode.pdf, 
> GracefulDecommissionYarnNode.pdf, YARN-4676.004.patch, YARN-4676.005.patch, 
> YARN-4676.006.patch, YARN-4676.007.patch, YARN-4676.008.patch, 
> YARN-4676.009.patch, YARN-4676.010.patch, YARN-4676.011.patch, 
> YARN-4676.012.patch, YARN-4676.013.patch, YARN-4676.014.patch, 
> YARN-4676.015.patch, YARN-4676.016.patch
>
>
> YARN-4676 implements an automatic, asynchronous and flexible mechanism for 
> gracefully decommissioning
> YARN nodes. After the user issues the refreshNodes request, the ResourceManager 
> automatically evaluates
> the status of all affected nodes and kicks off decommission or recommission 
> actions. The RM asynchronously
> tracks container and application status related to DECOMMISSIONING nodes to 
> decommission the
> nodes immediately after they are ready to be decommissioned. A decommissioning 
> timeout at individual
> node granularity is supported and can be dynamically updated. The 
> mechanism naturally supports multiple
> independent graceful decommissioning “sessions” where each one involves 
> different sets of nodes with
> different timeout settings. Such support is ideal and necessary for graceful 
> decommission requests issued
> by external cluster management software instead of a human.
> DecommissioningNodeWatcher inside ResourceTrackingService tracks 
> DECOMMISSIONING node status automatically and asynchronously after the 
> client/admin makes the graceful decommission request. It tracks 
> DECOMMISSIONING node status to decide when a node, after all running containers on 
> it have completed, will be transitioned into the DECOMMISSIONED state. 
> NodesListManager detects and handles include and exclude list changes to kick 
> off decommission or recommission as necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5233) Support for specifying a path for ATS plugin jars

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365061#comment-15365061
 ] 

Hadoop QA commented on YARN-5233:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 216 unchanged - 1 fixed = 216 total (was 217) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 2s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816463/YARN-5233-trunk.003.patch
 |
| JIRA Issue | YARN-5233 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux cb1a5b00cb19 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 04f6ebb |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12208/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoo

[jira] [Commented] (YARN-5314) ConcurrentModificationException in ATS v1.5 EntityGroupFSTimelineStore

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365043#comment-15365043
 ] 

Hadoop QA commented on YARN-5314:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816494/YARN-5314-trunk.002.patch
 |
| JIRA Issue | YARN-5314 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 724a49416718 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d169f50 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12209/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12209/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ConcurrentModificationException in ATS v1.5 EntityGroupFSTimelineStore
> --
>
> Key: YARN-5314
> URL: https://issues.apache.org/jira/browse/YARN-5314
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>

[jira] [Updated] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-07-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5270:
-
Attachment: YARN-5270.003.patch

Attached patch for trunk as well.

[~andrew.wang], I think it's not necessary to revert YARN-4844 from trunk 
because of this pending patch. The biggest issue this patch is trying to 
solve is binary compatibility. Since compatibility (not only binary) is not 
required between hadoop-2 and hadoop-3, I don't think we need to block the 
3.0.0 release on this.

However, since this add-on patch updates some of the public APIs, it would be 
good to get it into the first 3.0.0 release, but again, IMO, it's not a MUST.

BTW, do you think we should include the L&N patch in the 3.0.0-alpha1 release 
as well? It is still in the open state.

[~kasha], I just attached the patch for trunk, could you take a look at it?

Thanks,

> Solve miscellaneous issues caused by YARN-4844
> --
>
> Key: YARN-5270
> URL: https://issues.apache.org/jira/browse/YARN-5270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5270-branch-2.001.patch, 
> YARN-5270-branch-2.002.patch, YARN-5270-branch-2.003.patch, 
> YARN-5270-branch-2.8.001.patch, YARN-5270-branch-2.8.002.patch, 
> YARN-5270-branch-2.8.003.patch, YARN-5270.003.patch
>
>
> Such as javac warnings reported by YARN-5077 and type converting issues in 
> Resources class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5227) yarn logs command: no need to specify -applicationId when specifying containerId

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365036#comment-15365036
 ] 

Hudson commented on YARN-5227:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10055 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10055/])
YARN-5227. Yarn logs command: no need to specify applicationId when (jianhe: 
rev d169f5052fe83debcea7cf2f317dcd990890a857)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestLogsCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java


> yarn logs command: no need to specify -applicationId when specifying 
> containerId
> 
>
> Key: YARN-5227
> URL: https://issues.apache.org/jira/browse/YARN-5227
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Gergely Novák
> Fix For: 2.9.0
>
> Attachments: YARN-5227.001.patch, YARN-5227.002.patch
>
>
> No need to specify -applicaionId when specifying containerId, because 
> applicationId is retrievable from containerId



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5314) ConcurrentModificationException in ATS v1.5 EntityGroupFSTimelineStore

2016-07-06 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5314:

Attachment: YARN-5314-trunk.002.patch

New patch to address checkstyle comments. 

> ConcurrentModificationException in ATS v1.5 EntityGroupFSTimelineStore
> --
>
> Key: YARN-5314
> URL: https://issues.apache.org/jira/browse/YARN-5314
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Karam Singh
>Assignee: Li Lu
> Attachments: YARN-5314-trunk.001.patch, YARN-5314-trunk.002.patch
>
>
> ConcurrentModificationException seen in ATS logs while getting Entities in 
> ATS log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5283) Refactor container assignment into AbstractYarnScheduler#assignContainers

2016-07-06 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5283:
-
Attachment: YARN-5283.001.patch

Initial refactor attempt

> Refactor container assignment into AbstractYarnScheduler#assignContainers
> -
>
> Key: YARN-5283
> URL: https://issues.apache.org/jira/browse/YARN-5283
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager, 
> scheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5283.001.patch
>
>
> CapacityScheduler#allocateContainersToNode() and 
> FairScheduler#attemptScheduling() have some common code that can be 
> refactored into a common abstract method like 
> AbstractYarnScheduler#assignContainers().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15365007#comment-15365007
 ] 

Li Lu commented on YARN-5229:
-

Refactor LGTM. [~vrushalic], could you please address the two issues raised by 
Hadoop QA? I think those two points are valid. Thanks! 

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch
>
>
> As per [~gtCarrera9] commented in YARN-5170
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the method private, and in this separate jira we 
> can refactor these methods to TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5314) ConcurrentModificationException in ATS v1.5 EntityGroupFSTimelineStore

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364994#comment-15364994
 ] 

Hadoop QA commented on YARN-5314:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage:
 The patch generated 6 new + 6 unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 12s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816481/YARN-5314-trunk.001.patch
 |
| JIRA Issue | YARN-5314 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2c865fb8e89 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 04f6ebb |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12206/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12206/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12206/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Commented] (YARN-5316) fix hadoop-aws pom not to do the exclusion

2016-07-06 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364989#comment-15364989
 ] 

Li Lu commented on YARN-5316:
-

+1. Fix LGTM. Thanks for the work Sangjin! 

> fix hadoop-aws pom not to do the exclusion
> --
>
> Key: YARN-5316
> URL: https://issues.apache.org/jira/browse/YARN-5316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5316-YARN-2928.01.patch
>
>
> We originally introduced an exclusion rule for {{hadoop-yarn-server-tests}} 
> in {{hadoop-aws}}, as the {{hadoop-aws}} dependency on {{joda-time}} was 
> colliding with that coming from {{hadoop-yarn-server-timelineservice}} (via 
> {{phoenix-core}} ).
> Now that the phoenix dependency is no longer on 
> {{hadoop-yarn-server-timelineservice}} itself (it's moved to 
> {{hadoop-yarn-server-timelineservice-hbase-tests}} ), it is safe to remove 
> the exclusion rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5314) ConcurrentModificationException in ATS v1.5 EntityGroupFSTimelineStore

2016-07-06 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5314:

Attachment: YARN-5314-trunk.001.patch

Thanks [~karams] for reporting this issue! The two exceptions have different 
causes, so let's focus on the concurrent modification exception in this 
JIRA (maybe you'd like to log another one for the leveldb store creation 
problem?). In the current patch I did a quick refactor to clean up the 
synchronization logic of the detail log list, which should address the problem. 
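
For reference, the usual shape of such a fix (a minimal sketch only, not the 
contents of the attached patch) is to mutate the shared list under a lock and 
hand out snapshot copies to readers, so iteration never races with an add:

{code}
// Sketch: guarding a shared list against ConcurrentModificationException.
// Writers mutate under the lock; readers iterate over a snapshot copy.
import java.util.ArrayList;
import java.util.List;

class DetailLogListSketch {
  private final List<String> detailLogs = new ArrayList<>();

  synchronized void addDetailLog(String log) {
    detailLogs.add(log);
  }

  // Readers get a copy; iterating over it cannot throw even if another
  // thread adds a new detail log in the meantime.
  synchronized List<String> snapshot() {
    return new ArrayList<>(detailLogs);
  }
}
{code}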

> ConcurrentModificationException in ATS v1.5 EntityGroupFSTimelineStore
> --
>
> Key: YARN-5314
> URL: https://issues.apache.org/jira/browse/YARN-5314
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Karam Singh
>Assignee: Li Lu
> Attachments: YARN-5314-trunk.001.patch
>
>
> ConcurrentModificationException seen in ATS logs while getting Entities in 
> ATS log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5294) Pass remote ip address down to YarnAuthorizationProvider

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364961#comment-15364961
 ] 

Hadoop QA commented on YARN-5294:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
21s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 194 unchanged - 1 fixed = 194 total (was 195) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 11s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 17s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 38s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 6s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warning

[jira] [Commented] (YARN-3854) Add localization support for docker images

2016-07-06 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364957#comment-15364957
 ] 

Sidharta Seethana commented on YARN-3854:
-

[~tangzhankun], thanks for uploading the doc and the patch. I'll take a look at 
them both and get back to you.

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Zhankun Tang
> Attachments: YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf
>
>
> We need the ability to localize images from HDFS and load them for use when 
> launching docker containers. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5302) Yarn Application log Aggreagation fails due to NM can not get correct HDFS delegation token II

2016-07-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364953#comment-15364953
 ] 

Varun Saxena commented on YARN-5302:


I think the changes here will be required irrespective of YARN-5175. Delaying 
folder creation (done inside initAppAggregator) is a better solution IMO 
because it takes care of the case where the NM is shut down while updating the 
token in the state store.
Maybe we can store the apps whose initialization failed due to an invalid 
token somewhere (maybe in NMContext) and process them on the next heartbeat.
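
A rough sketch of that suggestion (the class, field and method names are 
hypothetical and only meant to illustrate the flow; they are not from the 
attached patches):

{code}
// Hypothetical sketch: remember apps whose log-aggregation init failed due to
// an invalid token and retry them once a later heartbeat brings fresh tokens.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PendingAggregatorRetrySketch {
  // appId -> user, recorded when createAppDir fails with a token error
  private final Map<String, String> pendingInit = new ConcurrentHashMap<>();

  void recordFailedInit(String appId, String user) {
    pendingInit.put(appId, user);
  }

  // Invoked after a heartbeat that may carry updated delegation tokens.
  void retryPendingAggregators() {
    pendingInit.forEach((appId, user) -> {
      if (tryInitAggregator(appId, user)) {
        pendingInit.remove(appId);
      }
    });
  }

  private boolean tryInitAggregator(String appId, String user) {
    // would call back into LogAggregationService#initAppAggregator here
    return true;
  }
}
{code}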

> Yarn Application log Aggreagation fails due to NM can not get correct HDFS 
> delegation token II
> --
>
> Key: YARN-5302
> URL: https://issues.apache.org/jira/browse/YARN-5302
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Attachments: YARN-5032.001.patch, YARN-5032.002.patch, 
> YARN-5302.003.patch, YARN-5302.004.patch
>
>
> Different from YARN-5098, this happens on the NM side. When the NM recovers, 
> credentials are read from the NMStateStore. When initializing app aggregators, 
> an exception happens because of the expired tokens. The app is a long-running 
> service.
> {code:title=LogAggregationService.java}
>   protected void initAppAggregator(final ApplicationId appId, String user,
>   Credentials credentials, ContainerLogsRetentionPolicy 
> logRetentionPolicy,
>   Map<ApplicationAccessType, String> appAcls,
>   LogAggregationContext logAggregationContext) {
> // Get user's FileSystem credentials
> final UserGroupInformation userUgi =
> UserGroupInformation.createRemoteUser(user);
> if (credentials != null) {
>   userUgi.addCredentials(credentials);
> }
>...
> try {
>   // Create the app dir
>   createAppDir(user, appId, userUgi);
> } catch (Exception e) {
>   appLogAggregator.disableLogAggregation();
>   if (!(e instanceof YarnRuntimeException)) {
> appDirException = new YarnRuntimeException(e);
>   } else {
> appDirException = (YarnRuntimeException)e;
>   }
>   appLogAggregators.remove(appId);
>   closeFileSystems(userUgi);
>   throw appDirException;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5302) Yarn Application log Aggreagation fails due to NM can not get correct HDFS delegation token II

2016-07-06 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364953#comment-15364953
 ] 

Varun Saxena edited comment on YARN-5302 at 7/6/16 7:39 PM:


I think the changes here will be required irrespective of YARN-5175. Delaying 
folder creation (done inside initAppAggregator) is a better solution IMO 
because it takes care of the case where the NM is shut down while updating the 
token in the state store.
Maybe we can store the apps whose initialization failed due to an invalid 
token somewhere (maybe in NMContext) and process them on the next heartbeat.


was (Author: varun_saxena):
I think the changes here will be required irrespective of YARN-5175. Delaying 
folder creation (done inside initAppAggregator) is a better solution IMO 
because it takes care of the case where the NM is shut down while updating the 
token in the state store.
Maybe we can store the apps whose initialization failed due to an invalid 
token somewhere (maybe in NMContext) and process them on the next heartbeat.

> Yarn Application log Aggreagation fails due to NM can not get correct HDFS 
> delegation token II
> --
>
> Key: YARN-5302
> URL: https://issues.apache.org/jira/browse/YARN-5302
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Attachments: YARN-5032.001.patch, YARN-5032.002.patch, 
> YARN-5302.003.patch, YARN-5302.004.patch
>
>
> Different from YARN-5098, this happens on the NM side. When the NM recovers, 
> credentials are read from the NMStateStore. When initializing app aggregators, 
> an exception happens because of the expired tokens. The app is a long-running 
> service.
> {code:title=LogAggregationService.java}
>   protected void initAppAggregator(final ApplicationId appId, String user,
>   Credentials credentials, ContainerLogsRetentionPolicy 
> logRetentionPolicy,
>   Map<ApplicationAccessType, String> appAcls,
>   LogAggregationContext logAggregationContext) {
> // Get user's FileSystem credentials
> final UserGroupInformation userUgi =
> UserGroupInformation.createRemoteUser(user);
> if (credentials != null) {
>   userUgi.addCredentials(credentials);
> }
>...
> try {
>   // Create the app dir
>   createAppDir(user, appId, userUgi);
> } catch (Exception e) {
>   appLogAggregator.disableLogAggregation();
>   if (!(e instanceof YarnRuntimeException)) {
> appDirException = new YarnRuntimeException(e);
>   } else {
> appDirException = (YarnRuntimeException)e;
>   }
>   appLogAggregators.remove(appId);
>   closeFileSystems(userUgi);
>   throw appDirException;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364935#comment-15364935
 ] 

Hadoop QA commented on YARN-5229:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
27s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 45s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s 
{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e2f6409 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816461/YARN-5229-YARN-2928.03.patch
 |
| JIRA Issue | YARN-5229 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux faf429578508 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2928 / 27550a4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12205/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12205/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12205/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-pr

[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-07-06 Thread Hitesh Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364893#comment-15364893
 ] 

Hitesh Sharma commented on YARN-5216:
-

[~asuresh], [~kkaranasos], thank you for the feedback and comments.

Regarding the refactoring being done and the reason to pull queues into the 
currently named {{OpportunisticContainerManager}}:

Roughly speaking, the {{QueuingContainersManagerImpl}} does the following for 
starting and stopping opportunistic containers:

* A running container simply gets preempted, while a container waiting in the 
queue is removed and the RM is notified to reallocate it elsewhere.
* Periodically it checks whether there are too many waiting containers in the 
queue and removes them so the RM can rebalance them.
* When a running container finishes, a waiting opportunistic container will 
be run if there are no guaranteed containers waiting in the queue.

If the preemption policy is to kill the container, then things are a little 
simpler and you can leave the opportunistic container queue within 
{{QueuingContainersManagerImpl}}. However, if the preemption policy is 
different, then we need extension points to know about the operations that the 
{{QueuingContainersManagerImpl}} wants to do and respond appropriately. Say the 
preemption policy is to put the container in a paused state so that it can be 
resumed once there is some room to run a container. This requires 
distinguishing whether the {{QueuingContainersManagerImpl}} is looking to 
run pending containers (e.g. we want to resume a preempted container over an OC 
which is still waiting in the queue) or is looking to rebalance waiting 
containers to other nodes (e.g. we can't reallocate a container in the paused 
state). For pretty much these reasons the pluggable policy is named 
{{OpportunisticContainerManager}}, as it allows you to preempt and start the 
opportunistic containers and also manages their queue. I'm open to suggestions 
on how to do this differently without having to change 
{{QueuingContainersManagerImpl}} a lot.

[~asuresh], can you elaborate a little why {{queuedGuaranteedContainers}} 
should also be pulled into the {{OpportunisticContainerManagerImpl}}?

I will look into using the ServiceLoader framework instead of reflection and 
add an extra constant to determine the default value.

Thanks a lot for the feedback and comments.
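
For reference, a minimal sketch of what ServiceLoader-based loading could look 
like (the interface name is hypothetical, and a provider still needs the usual 
META-INF/services entry on the classpath):

{code}
// Sketch: discovering a pluggable preemption policy via java.util.ServiceLoader
// instead of reflection, with a kill-based fallback as the default.
import java.util.ServiceLoader;

interface OpportunisticPreemptionPolicy {
  void preempt(String containerId);
}

class PolicyLoaderSketch {
  static OpportunisticPreemptionPolicy load() {
    ServiceLoader<OpportunisticPreemptionPolicy> loader =
        ServiceLoader.load(OpportunisticPreemptionPolicy.class);
    for (OpportunisticPreemptionPolicy policy : loader) {
      return policy;                       // first provider on the classpath wins
    }
    // no provider configured: fall back to the current default behaviour (kill)
    return containerId ->
        System.out.println("Killing OPPORTUNISTIC container " + containerId);
  }
}
{code}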

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5316) fix hadoop-aws pom not to do the exclusion

2016-07-06 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364887#comment-15364887
 ] 

Vrushali C commented on YARN-5316:
--

+1 
Thanks for the patch, Sangjin! I can commit it at EOD today.

> fix hadoop-aws pom not to do the exclusion
> --
>
> Key: YARN-5316
> URL: https://issues.apache.org/jira/browse/YARN-5316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5316-YARN-2928.01.patch
>
>
> We originally introduced an exclusion rule for {{hadoop-yarn-server-tests}} 
> in {{hadoop-aws}}, as the {{hadoop-aws}} dependency on {{joda-time}} was 
> colliding with that coming from {{hadoop-yarn-server-timelineservice}} (via 
> {{phoenix-core}} ).
> Now that the phoenix dependency is no longer on 
> {{hadoop-yarn-server-timelineservice}} itself (it's moved to 
> {{hadoop-yarn-server-timelineservice-hbase-tests}} ), it is safe to remove 
> the exclusion rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5315) Standby RM keep sending start am container request to NM

2016-07-06 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364888#comment-15364888
 ] 

Vinod Kumar Vavilapalli commented on YARN-5315:
---

shutdownNow() still does not wait for actively executing tasks to terminate. 
You should make an explicit awaitTermination() call to do that.

We should also call "super.serviceStop()" to complete the life-cycle; can you 
make that change too?
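
A minimal sketch of that sequence, assuming the service extends 
org.apache.hadoop.service.AbstractService as ApplicationMasterLauncher does 
(the pool, timeout and class name below are illustrative only):

{code}
// Sketch only: shutdownNow() to drop queued launch requests and interrupt
// workers, a bounded awaitTermination(), then super.serviceStop() to finish
// the service life-cycle.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.service.AbstractService;

class LauncherShutdownSketch extends AbstractService {
  private final ExecutorService launcherPool = Executors.newFixedThreadPool(10);

  LauncherShutdownSketch() {
    super(LauncherShutdownSketch.class.getName());
  }

  @Override
  protected void serviceStop() throws Exception {
    launcherPool.shutdownNow();
    if (!launcherPool.awaitTermination(60, TimeUnit.SECONDS)) {
      System.err.println("launcherPool did not terminate within 60 seconds");
    }
    super.serviceStop();
  }
}
{code}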

> Standby RM keep sending start am container request to NM
> 
>
> Key: YARN-5315
> URL: https://issues.apache.org/jira/browse/YARN-5315
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5315.01.patch
>
>
> 1, network partitions, RM couldn't connect to NMs and start AM request pending
> 2, RM becomes standby, int ApplicatioinMasterLauncher#serviceStop, 
> launcherPool are shutdown. the launching thread are interrupted, but start AM 
> request may still left in Queue
> 3,network reconnect,  standby RM sends start AM request to NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4939) the decommissioning Node should keep alive if NM restart

2016-07-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364886#comment-15364886
 ] 

Hadoop QA commented on YARN-4939:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 73 unchanged - 2 fixed = 73 total (was 75) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 49s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12816468/YARN-4939.05.patch |
| JIRA Issue | YARN-4939 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3e52f3a97fd5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 04f6ebb |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12204/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12204/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12204/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resour

[jira] [Commented] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-07-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364865#comment-15364865
 ] 

Andrew Wang commented on YARN-5270:
---

I think this is the last JIRA we're waiting on for 3.0.0-alpha1. If we can't 
get it done by end-of-week, can I unblock the release by reverting YARN-4844 
from the 3.0.0-alpha1 release branch and picking it up in a later alpha?

> Solve miscellaneous issues caused by YARN-4844
> --
>
> Key: YARN-5270
> URL: https://issues.apache.org/jira/browse/YARN-5270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5270-branch-2.001.patch, 
> YARN-5270-branch-2.002.patch, YARN-5270-branch-2.003.patch, 
> YARN-5270-branch-2.8.001.patch, YARN-5270-branch-2.8.002.patch, 
> YARN-5270-branch-2.8.003.patch
>
>
> Such as javac warnings reported by YARN-5077 and type converting issues in 
> Resources class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5316) fix hadoop-aws pom not to do the exclusion

2016-07-06 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364843#comment-15364843
 ] 

Sangjin Lee commented on YARN-5316:
---

Let me know what you think. I'd like to make this an exception and get it into 
YARN-2928 before the merge as it was raised during the merge vote. Thanks!

> fix hadoop-aws pom not to do the exclusion
> --
>
> Key: YARN-5316
> URL: https://issues.apache.org/jira/browse/YARN-5316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5316-YARN-2928.01.patch
>
>
> We originally introduced an exclusion rule for {{hadoop-yarn-server-tests}} 
> in {{hadoop-aws}}, as the {{hadoop-aws}} dependency on {{joda-time}} was 
> colliding with that coming from {{hadoop-yarn-server-timelineservice}} (via 
> {{phoenix-core}} ).
> Now that the phoenix dependency is no longer on 
> {{hadoop-yarn-server-timelineservice}} itself (it's moved to 
> {{hadoop-yarn-server-timelineservice-hbase-tests}} ), it is safe to remove 
> the exclusion rule.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-07-06 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364799#comment-15364799
 ] 

Konstantinos Karanasos commented on YARN-5216:
--

Thinking more about it...
Another option would be to create a class 
{{QueuingContainersPreemptionManagerImpl}} that subclasses the 
{{QueuingContainersManagerImpl}}.
This way you don't need to touch the {{QueuingContainersManagerImpl}} (which 
will provide the default option of killing containers), and you can override 
any of the methods you need (e.g., stopContainer) in the subclass.
That will give you the flexibility to directly access any of the fields of the 
{{QueuingContainersManagerImpl}} in your subclass.
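A minimal, self-contained sketch of that subclassing idea (class names follow this comment, but the constructors, method signatures and the policy interface are stand-ins, not the actual NM code):
{code}
// Illustrative only: the real queuing container manager has different
// constructors and signatures; this just shows the override point.
interface OpportunisticPreemptionPolicy {
  void preempt(String containerId);
}

class QueuingContainersManagerImpl {
  // Default behaviour from YARN-2883: kill the OPPORTUNISTIC container.
  protected void stopContainer(String containerId) {
    System.out.println("Killing OPPORTUNISTIC container " + containerId);
  }
}

class QueuingContainersPreemptionManagerImpl extends QueuingContainersManagerImpl {
  private final OpportunisticPreemptionPolicy policy;

  QueuingContainersPreemptionManagerImpl(OpportunisticPreemptionPolicy policy) {
    this.policy = policy;
  }

  @Override
  protected void stopContainer(String containerId) {
    // Apply the configurable action (e.g. pause or checkpoint) instead of killing.
    policy.preempt(containerId);
  }
}
{code}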

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN5216.001.patch, yarn5216.002.patch
>
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-07-06 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364765#comment-15364765
 ] 

Karthik Kambatla commented on YARN-5215:


Spoke to Inigo offline at the summit. 

My primary concern is with assuming unused resources on the node can be used by 
YARN. It is not uncommon for users to be running something else besides YARN on 
the worker nodes. While these external processes might not be using any 
resources at the time, they might be high-priority workloads that need to be 
able to use those resources immediately.
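For reference, the estimation the description below proposes amounts to something like this rough sketch (the numbers and names are stand-ins, not the YARN utilization API); the "external" figure is exactly the quantity that can change without warning, which is the concern above:
{code}
// Stand-alone illustration only.
public class ExternalLoadEstimateSketch {
  public static void main(String[] args) {
    int nodeCapacityMb = 64 * 1024;       // memory YARN is configured to own
    int nodeUtilizedMb = 40 * 1024;       // memory actually in use on the host
    int containersUtilizedMb = 28 * 1024; // memory used by YARN containers

    // Anything the host uses beyond the containers is treated as external load.
    int externalMb = Math.max(0, nodeUtilizedMb - containersUtilizedMb);

    // Headroom the scheduler would see if it discounted the external load.
    int headroomMb = nodeCapacityMb - containersUtilizedMb - externalMb;
    System.out.println("external=" + externalMb + " MB, headroom=" + headroomMb + " MB");
  }
}
{code}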

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5294) Pass remote ip address down to YarnAuthorizationProvider

2016-07-06 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5294:
--
Attachment: YARN-5294-branch-2.patch

Uploaded the branch-2 patch.

> Pass remote ip address down to YarnAuthorizationProvider
> 
>
> Key: YARN-5294
> URL: https://issues.apache.org/jira/browse/YARN-5294
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5294-branch-2.patch, YARN-5294.1.patch, 
> YARN-5294.2.patch, YARN-5294.3.patch, YARN-5294.4.patch, YARN-5294.5.patch, 
> YARN-5294.6.patch
>
>
> Pass the remote ip address down to the authorizer. The underlying authorizer 
> implementation can make use of this information for its own authorization 
> rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4939) the decommissioning Node should keep alive if NM restart

2016-07-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-4939:
-
Attachment: YARN-4939.05.patch

Put up a patch with this minor fix and adjusted some imports. Will commit if Mr. 
Jenkins gives it a +1.

> the decommissioning Node should keep alive  if NM restart
> -
>
> Key: YARN-4939
> URL: https://issues.apache.org/jira/browse/YARN-4939
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-4939.01.patch, YARN-4939.02.patch, 
> YARN-4939.03.patch, YARN-4939.04.patch, YARN-4939.05.patch
>
>
> 1, gracefully decommission a node A
> 2, restart node A
> 3, node A could not register to RM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5233) Support for specifying a path for ATS plugin jars

2016-07-06 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364758#comment-15364758
 ] 

Li Lu commented on YARN-5233:
-

Oh, and our ApplicationClassLoader delegates the loadClass call to its parent 
only after loading from its own URLs fails:
{code}
if (c == null && !isSystemClass(name, systemClasses)) {
  // Try to load class from this classloader's URLs. Note that this is like
  // the servlet spec, not the usual Java 2 behaviour where we ask the
  // parent to attempt to load first.
  try {
c = findClass(name);
if (LOG.isDebugEnabled() && c != null) {
  LOG.debug("Loaded class: " + name + " ");
}
  } catch (ClassNotFoundException e) {
if (LOG.isDebugEnabled()) {
  LOG.debug(e);
}
ex = e;
  }
}

if (c == null) { // try parent
  c = parent.loadClass(name);
  if (LOG.isDebugEnabled() && c != null) {
LOG.debug("Loaded class from parent: " + name + " ");
  }
}
{code}
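For what it's worth, here is a minimal sketch of feeding jars from a configured path into a URL-based classloader; the property name and directory are assumptions, not the patch's actual config key, and unlike the ApplicationClassLoader above a plain URLClassLoader stays parent-first:
{code}
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class PluginPathLoaderSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical property; the real key would come from yarn-site configuration.
    File dir = new File(System.getProperty("ats.plugin.dir", "/tmp/ats-plugins"));
    List<URL> urls = new ArrayList<>();
    File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
    if (jars != null) {
      for (File jar : jars) {
        urls.add(jar.toURI().toURL());
      }
    }
    // Build a loader over the plugin jars, falling back to the current classloader.
    try (URLClassLoader loader = new URLClassLoader(
        urls.toArray(new URL[0]), PluginPathLoaderSketch.class.getClassLoader())) {
      System.out.println("Plugin classpath entries: " + urls);
    }
  }
}
{code}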

> Support for specifying a path for ATS plugin jars
> -
>
> Key: YARN-5233
> URL: https://issues.apache.org/jira/browse/YARN-5233
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5233-trunk.001.patch, YARN-5233-trunk.002.patch, 
> YARN-5233-trunk.003.patch
>
>
> Third-party plugins need to add their jars to ATS. Most of the time, 
> isolation is not needed. However, there needs to be a way to specify the 
> path. For now, the jars on that path can be added to the default classloader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5294) Pass remote ip address down to YarnAuthorizationProvider

2016-07-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364748#comment-15364748
 ] 

Hudson commented on YARN-5294:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10054 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10054/])
YARN-5294. Pass remote ip address down to YarnAuthorizationProvider. (wangda: 
rev 04f6ebb66a4ffc04a635ab9c0234080f290b39f2)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/AccessRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/QueueACLsManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationACLs.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestClientRMService.java


> Pass remote ip address down to YarnAuthorizationProvider
> 
>
> Key: YARN-5294
> URL: https://issues.apache.org/jira/browse/YARN-5294
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5294.1.patch, YARN-5294.2.patch, YARN-5294.3.patch, 
> YARN-5294.4.patch, YARN-5294.5.patch, YARN-5294.6.patch
>
>
> Pass the remote ip address down to the authorizer. The underlying authorizer 
> implementation can make use of this information for its own authorization 
> rules.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


