[jira] [Updated] (YARN-5258) Document Use of Docker with LinuxContainerExecutor

2017-02-08 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5258:

Fix Version/s: 2.9.0

> Document Use of Docker with LinuxContainerExecutor
> --
>
> Key: YARN-5258
> URL: https://issues.apache.org/jira/browse/YARN-5258
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-easy
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-5258.001.patch, YARN-5258.002.patch, 
> YARN-5258.003.patch, YARN-5258.004.patch, YARN-5258.005.patch
>
>
> There aren't currently any docs that explain how to configure Docker and all 
> of its various options aside from reading all of the JIRAs.  We need to 
> document the configuration, use, and troubleshooting, along with helpful 
> examples.






[jira] [Commented] (YARN-6027) Improve /flows API for more flexible filters fromid, collapse, userid

2017-02-08 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859062#comment-15859062
 ] 

Rohith Sharma K S commented on YARN-6027:
-

[~varun_saxena]
bq. Can't we apply PageFilter in steps in collapse mode?
Initially I had in mind to do it as you suggested in the code base, but that
requires a scan every time. Instead of scanning repeatedly, it is better to scan
once and obtain the scanner once. ResultScanner is used for retrieval, and
iterating through the keys with it should be fine; connecting to the next
regions and other details are handled by the HBase client, i.e. ResultScanner.
Also, to apply fromId to collapsed data, we first need to read all the data and
then apply fromId.
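
A minimal sketch of the "scan once, collapse, then apply fromId" approach described
above, using the plain HBase client API. The table name, row-key layout, and collapse
key below are placeholders for illustration, not the actual timeline reader code.

{code:java}
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class FlowCollapseSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("flowactivity"));
         ResultScanner scanner = table.getScanner(new Scan())) {
      // Single scan; the HBase client (ResultScanner) moves across regions
      // on its own as we iterate.
      Map<String, Result> collapsed = new LinkedHashMap<>();
      for (Result r : scanner) {
        String rowKey = Bytes.toString(r.getRow());
        // Derive a user!flow collapse key from the row key (the real row-key
        // encoding is different; this only shows the dedup step).
        String collapseKey = rowKey.substring(rowKey.indexOf('!') + 1);
        collapsed.putIfAbsent(collapseKey, r);
      }
      // fromId can only be applied after the collapse, as noted above: skip
      // collapsed entries until the requested id is seen, then emit the page.
    }
  }
}
{code}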

> Improve /flows API for more flexible filters fromid, collapse, userid
> -
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flow-run apps, flow runs, and flows as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates are 
> found?
> * TimelineEntity:
> ** Should the equals method also check idPrefix?
> ** Is idPrefix part of the identifiers?






[jira] [Commented] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-02-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859055#comment-15859055
 ] 

Sangjin Lee commented on YARN-4985:
---

[~haibochen]

Is it possible to create the hbase-server module so that it does *not* depend 
on the hbase-client module? That's what I meant by watching out for 
dependencies in the hbase-server code. It would be ideal if the hbase-server 
code does not have to depend on the hbase-client code. It might mean splitting 
the tests and putting some in the client and others in the server.

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-4985-YARN-5355.prelim.patch
>
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package which does not depend 
> on hadoop-yarn-server classes. It only needs hbase and other util classes.
> These util classes and tag-definition-related classes can be refactored into 
> their own independent "definition" package so that making changes to the 
> coprocessor code, upgrading hbase, or deploying hbase on a cluster with a 
> different hadoop version all become operationally much easier and less error 
> prone with respect to mismatched library jars.






[jira] [Commented] (YARN-6027) Improve /flows API for more flexible filters fromid, collapse, userid

2017-02-08 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859028#comment-15859028
 ] 

Rohith Sharma K S commented on YARN-6027:
-

[~gtCarrera9] 
Currently, flow activities are tracked on a daily basis. When a user retrieves 
flows using just */flows*, flow entities for the same user-flowname combination 
are returned as separate entities, one per day. That is useful when flow 
activity needs to be tracked day by day.

More generally, though, a user wants to track a flow as a single entity over a 
given time range. In that case the retrieved entities must not contain 
duplicates for a user-flowname combination, and the aggregation of all flows 
for that combination has to be done on the reader side. The default behavior of 
the existing API is kept as-is; this JIRA only introduces additional filters 
for /flows.

The user/fromid filters also work without collapse mode; all filters work 
independently as well as in combination. I do not fully agree that this is a 
group-by operation, if you are looking at it from a SQL background. The API 
still maintains ordering by execution time.
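
To make the filter semantics concrete, a hypothetical reader-side sketch (the entity
fields and filter names are assumed, not the timeline reader implementation) showing
that userid, collapse, and fromid compose independently while preserving
execution-time order:

{code:java}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FlowFilterSketch {
  static class FlowActivity {
    String user;
    String flowName;
    String id;          // stable id a client could pass back as fromid
    FlowActivity(String user, String flowName, String id) {
      this.user = user; this.flowName = flowName; this.id = id;
    }
  }

  // "scanned" is assumed to arrive already ordered by execution time.
  static List<FlowActivity> apply(List<FlowActivity> scanned, String userId,
      boolean collapse, String fromId, int limit) {
    List<FlowActivity> out = new ArrayList<>();
    Map<String, FlowActivity> seen = new LinkedHashMap<>();
    boolean emitting = (fromId == null);
    for (FlowActivity f : scanned) {
      if (userId != null && !userId.equals(f.user)) {
        continue;                               // userid filter works on its own
      }
      if (collapse && seen.putIfAbsent(f.user + "!" + f.flowName, f) != null) {
        continue;                               // drop daily duplicates of the same flow
      }
      if (!emitting) {
        emitting = f.id.equals(fromId);         // fromid is applied after collapsing
        if (!emitting) {
          continue;
        }
      }
      out.add(f);
      if (out.size() >= limit) {
        break;
      }
    }
    return out;                                 // execution-time order is preserved
  }
}
{code}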


> Improve /flows API for more flexible filters fromid, collapse, userid
> -
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch
>
>
> In YARN-5585, fromId is supported for retrieving entities. We need a similar 
> filter for flow-run apps, flow runs, and flows as well. 
> Along with supporting fromId, this JIRA should also discuss the following points:
> * Should we throw an exception for entities/entity retrieval if duplicates are 
> found?
> * TimelineEntity:
> ** Should the equals method also check idPrefix?
> ** Is idPrefix part of the identifiers?






[jira] [Commented] (YARN-4090) Make Collections.sort() more efficient in FSParentQueue.java

2017-02-08 Thread zhangshilong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859019#comment-15859019
 ] 

zhangshilong commented on YARN-4090:


Thanks [~yufeigu].
When an application finishes or its tasks finish, FSParentQueue and FSLeafQueue 
should update resourceUsage; it should also be updated on preemption. In 
[~xinxianyin]'s patch YARN-4090.003.patch, preemption and task completion have 
already been considered. When I created the patch file, one of my commits was 
left out by mistake.
In my view, resourceUsage in FSParentQueue and FSLeafQueue should be updated on 
allocation, task completion, and preemption.
As the QA messages indicate, unit tests are needed, so I will add unit tests for 
the resourceUsage calculation.
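
A minimal sketch of the invariant described above (illustrative only, not the
FSParentQueue/FSLeafQueue code): usage is added on allocation and subtracted on both
completion and preemption, using the existing Resources utility.

{code:java}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class QueueUsageSketch {
  private final Resource resourceUsage = Resources.createResource(0, 0);

  synchronized void onContainerAllocated(Resource allocated) {
    Resources.addTo(resourceUsage, allocated);
  }

  // Called both when a container completes normally and when it is preempted,
  // so usage never drifts regardless of how the container ended.
  synchronized void onContainerReleased(Resource released) {
    Resources.subtractFrom(resourceUsage, released);
  }

  synchronized Resource getResourceUsage() {
    return Resources.clone(resourceUsage);
  }
}
{code}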


> Make Collections.sort() more efficient in FSParentQueue.java
> 
>
> Key: YARN-4090
> URL: https://issues.apache.org/jira/browse/YARN-4090
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Xianyin Xin
>Assignee: zhangshilong
> Attachments: sampling1.jpg, sampling2.jpg, YARN-4090.001.patch, 
> YARN-4090.002.patch, YARN-4090.003.patch, YARN-4090.004.patch, 
> YARN-4090.005.patch, YARN-4090.006.patch, YARN-4090-preview.patch, 
> YARN-4090-TestResult.pdf
>
>
> Collections.sort() consumes too much time in a scheduling round.






[jira] [Commented] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858983#comment-15858983
 ] 

Hadoop QA commented on YARN-6125:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 316 unchanged - 2 fixed = 316 total (was 318) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
38s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
44s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6125 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851768/YARN-6125.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux b052774b037e 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37b4acf |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14869/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
h

[jira] [Commented] (YARN-6118) Add javadoc for Resources.isNone

2017-02-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858978#comment-15858978
 ] 

Sunil G commented on YARN-6118:
---

+1. Committing later today if there are no other comments.

> Add javadoc for Resources.isNone
> 
>
> Key: YARN-6118
> URL: https://issues.apache.org/jira/browse/YARN-6118
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Andres Perez
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6118.002.patch, YARN-6118.patch
>
>







[jira] [Commented] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858974#comment-15858974
 ] 

Hadoop QA commented on YARN-4985:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 27 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
19s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
14s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 hadoop-yarn-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
51s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 48s{color} | {color:orange} root: The patch generated 5 new + 10 unchanged - 
5 fixed = 15 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
9s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 hadoop-yarn-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-client in the 
patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  2s{color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
14s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-client in 
the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color}

[jira] [Commented] (YARN-5501) Container Pooling in YARN

2017-02-08 Thread Hitesh Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858926#comment-15858926
 ] 

Hitesh Sharma commented on YARN-5501:
-

[~jlowe], thanks for the great feedback and time taken to respond.

Some more details on how attach and detach container actually work.

The PoolManager creates the pre-initialized containers, and they are not different 
from regular containers in any real way. When the ContainerManager receives a 
startContainer request, it issues a DETACH_CONTAINER event. The detach exists to 
ensure that we can clean up the state associated with the pre-init container while 
avoiding cleanup of its localized resources. The ContainerManager listens for the 
CONTAINER_DETACHED event; once it receives it, it creates the ContainerImpl for the 
requested container and passes the information about the detached container to the 
ContainerImpl constructor. The ContainerManager also follows the regular code path 
for starting the container, which means that resource localization happens for the 
new container, and when it would normally raise the launch event, the ContainerImpl 
instead raises the ATTACH_CONTAINER event. This lets the ContainersLauncher call 
attachContainer() on the executor, which is where we choose to launch the other 
processes required for that container. I hope this helps clarify things a little 
bit more.
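
To make the sequencing concrete, a rough sketch using the event names from the
description above; none of these types exist as such in the current code base, this
is only to illustrate the ordering.

{code:java}
public class ContainerPoolingSketch {
  enum PoolEvent { DETACH_CONTAINER, CONTAINER_DETACHED, ATTACH_CONTAINER }

  void onStartContainerRequest(String preInitContainerId) {
    // 1. Tear down the pre-init container's bookkeeping, but keep its
    //    localized resources on disk.
    dispatch(PoolEvent.DETACH_CONTAINER, preInitContainerId);
  }

  void onEvent(PoolEvent event, String containerId) {
    switch (event) {
      case CONTAINER_DETACHED:
        // 2. Build the ContainerImpl for the requested container, passing the
        //    detached container's information into its constructor, then run
        //    the normal start path (including localization for the new
        //    container's own resources).
        break;
      case ATTACH_CONTAINER:
        // 3. Instead of launching from scratch, ContainersLauncher calls
        //    attachContainer() on the executor, which starts whatever extra
        //    processes the new container needs.
        break;
      default:
        break;
    }
  }

  private void dispatch(PoolEvent event, String containerId) {
    // placeholder for the NM event dispatcher
  }
}
{code}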

bq. I'm thinking of a use-case where the container is a base set that applies 
to all instances of an app framework, but each app may need a few extra things 
localized to do an app-specific thing (think UDFs for Hive/Pig, etc.). Curious 
if that is planned and how to deal with the lifecycle of those "extra" per-app 
things.

Yes, the base set of things applies to all instances of the app framework, but 
localization is still done for each instance. For example, you can download a set 
of binaries via pre-initialization while more job-specific things come later.

bq. So it sounds like there is a new container ID generated in the 
application's container namespace as part of the "allocation" to fill the app's 
request, but this container ID is aliased to an already existing container ID 
in another application's namespace, not only at the container executor level 
but all the way up to the container ID seen at the app level, correct?

The application gets a container ID from the YARN RM and uses that for all 
purposes. On the NM we internally switch to using the pre-init container ID as 
the PID. For example, say the pre-init container had the ID container1234 while 
the AM-requested container had the ID containerABCD. Even though we reuse the 
existing pre-init container1234 to service the start-container request on the NM, 
we never surface container1234 to the application; the app always sees 
containerABCD.
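
A tiny sketch of that ID aliasing (a hypothetical helper, not NM code): the
application only ever sees the RM-issued ID, while the NM internally resolves it to
the pre-initialized container's ID.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ContainerIdAliasSketch {
  // app-visible container ID -> pre-init container ID used internally as the handle
  private final Map<String, String> alias = new ConcurrentHashMap<>();

  void attach(String appVisibleId, String preInitId) {
    alias.put(appVisibleId, preInitId);
  }

  /** NM-internal lookups resolve to the pre-init container; apps never see it. */
  String resolveForNodeManager(String appVisibleId) {
    return alias.getOrDefault(appVisibleId, appVisibleId);
  }
}
{code}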

bq. One idea is to treat these things like the page cache in Linux. In other 
words, we keep a cache of idle containers as apps run them. These containers, 
like page cache entries, will be quickly discarded if they are unused and we 
need to make room for other containers. We're simply caching successful 
containers that have been run on the cluster, ready to run another task just 
like it. Apps would still need to make some tweaks to their container code so 
it talks the yet-to-be-detailed-and-mysterious attach/detach protocol so they 
can participate in this automatic container cache, and there would need to be 
changes in how containers are requested so the RM can properly match a request 
to an existing container (something that already has to be done for any reuse 
approach). Seems like it would adapt well to shifting loads on the cluster and 
doesn't require a premeditated, static config by users to get their app load to 
benefit. Has something like that been considered?

That is a very interesting idea. If the app can provide some hints about when it 
is appropriate to consider a container pre-initialized, then when the container 
finishes we can carry out the operations required to return it to the pre-init 
state. Thanks for bringing this up.
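
For what it's worth, a minimal sketch of that caching idea (all names hypothetical,
not a proposed API): keep recently finished containers keyed by something the RM
could match a new request against, and evict idle ones first when room is needed.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class IdleContainerCacheSketch<K, C> {
  private final int maxIdle;
  private final LinkedHashMap<K, C> idle;

  public IdleContainerCacheSketch(int maxIdle) {
    this.maxIdle = maxIdle;
    // access-order LinkedHashMap gives simple LRU eviction of idle containers
    this.idle = new LinkedHashMap<K, C>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<K, C> eldest) {
        return size() > IdleContainerCacheSketch.this.maxIdle;
      }
    };
  }

  /** A container finished successfully; keep it in case a matching request arrives. */
  public synchronized void offer(K requestSignature, C container) {
    idle.put(requestSignature, container);
  }

  /** Returns a cached container matching the request, or null to launch a fresh one. */
  public synchronized C take(K requestSignature) {
    return idle.remove(requestSignature);
  }
}
{code}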

bq. I think that's going to be challenging for the apps in practice and will 
limit which apps can leverage this feature reliably. This is going to be 
challenging for containers running VMs whose memory limits need to be set up at 
startup (e.g.: JVMs). Minimally I think this feature needs a way for apps to 
specify that they do not have a way to communicate (or at least act upon) 
memory changes. In those cases YARN will have to decide on tradeoffs like a 
primed-but-oversized container that will run fast but waste grid resources and 
also avoid reusing a container that needs to grow to satisfy the app 
request.

Hmm..let me look at the code and see how container resizing works today. What 
you are saying makes sense, but in that case container resizing won't work as 
well. For our scenarios resourc

[jira] [Updated] (YARN-6125) The application attempt's diagnostic message should have a maximum size

2017-02-08 Thread Andras Piros (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Piros updated YARN-6125:
---
Attachment: YARN-6125.006.patch

Small Javadoc fix.

> The application attempt's diagnostic message should have a maximum size
> ---
>
> Key: YARN-6125
> URL: https://issues.apache.org/jira/browse/YARN-6125
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Andras Piros
>Priority: Critical
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6125.000.patch, YARN-6125.001.patch, 
> YARN-6125.002.patch, YARN-6125.003.patch, YARN-6125.004.patch, 
> YARN-6125.005.patch, YARN-6125.006.patch
>
>
> We've found through experience that the diagnostic message can grow 
> unbounded.  I've seen attempts that have diagnostic messages over 1MB.  Since 
> the message is stored in the state store, it's a bad idea to allow the 
> message to grow unbounded.  Instead, there should be a property that sets a 
> maximum size on the message.
> I suspect that some of the ZK state store issues we've seen in the past were 
> due to the size of the diagnostic messages and not to the size of the 
> classpath, as is the current prevailing opinion.
> An open question is how best to prune the message once it grows too large.  
> Should we
> # truncate the tail,
> # truncate the head,
> # truncate the middle,
> # add another property to make the behavior selectable, or
> # none of the above?
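
As a concrete illustration of option 3 above (truncate the middle), a minimal
sketch; the helper below is hypothetical and not the implementation in the attached
patches.

{code:java}
public class DiagnosticsTruncationSketch {
  private static final String MARKER = "\n...[diagnostics truncated]...\n";

  static String truncateMiddle(String diagnostics, int maxChars) {
    if (diagnostics == null || diagnostics.length() <= maxChars) {
      return diagnostics;                      // nothing to prune
    }
    int keep = Math.max(0, maxChars - MARKER.length());
    int head = keep / 2;                       // keep the start (how the attempt began)
    int tail = keep - head;                    // and the end (how it finally failed)
    return diagnostics.substring(0, head)
        + MARKER
        + diagnostics.substring(diagnostics.length() - tail);
  }
}
{code}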






[jira] [Commented] (YARN-6144) FairScheduler: preempted resources can become negative

2017-02-08 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858836#comment-15858836
 ] 

Karthik Kambatla commented on YARN-6144:


Great. Patch looks good. 

One other minor comment: should we add a similar if check in 
{{trackContainerForPreemption}} before we increment {{preemptedResources}}?
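
To make the suggested guard concrete, a hedged sketch of the bookkeeping under
discussion (not the actual FairScheduler code): resources are added only for
containers newly marked for preemption, and subtracted only for containers that were
actually marked.

{code:java}
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class PreemptionTrackingSketch {
  private final Set<ContainerId> containersToPreempt = new HashSet<>();
  private final Resource preemptedResources = Resources.createResource(0, 0);

  synchronized void trackContainerForPreemption(ContainerId id, Resource res) {
    // The "similar if check": only count the container once.
    if (containersToPreempt.add(id)) {
      Resources.addTo(preemptedResources, res);
    }
  }

  synchronized void containerCompleted(ContainerId id, Resource res) {
    // Only subtract if this container was actually preempted; a normally
    // completed container must not drive preemptedResources negative.
    if (containersToPreempt.remove(id)) {
      Resources.subtractFrom(preemptedResources, res);
    }
  }
}
{code}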

> FairScheduler: preempted resources can become negative
> --
>
> Key: YARN-6144
> URL: https://issues.apache.org/jira/browse/YARN-6144
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: Screen Shot 2017-02-02 at 12.49.14 PM.png, 
> YARN-6144.000.patch, YARN-6144.001.patch, YARN-6144.002.patch
>
>
> {{preemptContainers()}} calls {{trackContainerForPreemption()}} to collect 
> the list of containers and resources that were preempted for an application. 
> Later the list is reduced when {{containerCompleted()}} calls 
> {{untrackContainerForPreemption()}}. The bug is that the resource variable 
> {{preemptedResources}} is subtracted not only when the container was 
> preempted but also when it has completed successfully. This causes 
> {{getResourceUsage()}} to return an incorrect value.






[jira] [Commented] (YARN-4212) FairScheduler: Parent queues is not allowed to be 'Fair' policy if its children have the "drf" policy

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858798#comment-15858798
 ] 

Hadoop QA commented on YARN-4212:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 267 unchanged - 6 fixed = 270 total (was 273) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 908 unchanged - 0 fixed = 909 total (was 908) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4212 |
| GITHUB PR | https://github.com/apache/hadoop/pull/181 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3f4587ca7092 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37b4acf |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14867/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/14867/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14867/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YAR

[jira] [Commented] (YARN-6118) Add javadoc for Resources.isNone

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858748#comment-15858748
 ] 

Hadoop QA commented on YARN-6118:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6118 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851740/YARN-6118.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3c9622533229 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37b4acf |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/14866/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/14866/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add javadoc for Resources.isNone
> 
>
> Key: YARN-6118
> URL: https://issues.apache.org/jira/browse/YARN-6118
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Andres Perez
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-611

[jira] [Commented] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-02-08 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858743#comment-15858743
 ] 

Haibo Chen commented on YARN-4985:
--

Uploading a patch for preliminary review. The hbase-backend module has been 
split into two separate modules based on the versions of 
hadoop-common/hdfs/auth that they need to run under. The coprocessor code and 
the code in hbase-tests are now in the same module, 
*timelineservice-hbase-server, which depends on *timelineservice-hbase-client; 
the client module includes all table schemas and the HBase code executed in 
YARN trunk. YARN-6094 enabled dynamic loading of the coprocessor from HDFS, but 
if the coprocessor code now lives in an hbase-server jar that depends on the 
hbase-client jar, I don't think dynamic loading from HDFS will work unless 
HBase somehow knows how to pull in the dependent jars. Thoughts on a 
workaround, [~sjlee0]? In the meantime, I will continue to verify the maven 
dependencies to make sure they are clean.
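
For context, a minimal sketch of how a coprocessor is typically attached to a table
from a jar on HDFS with the HBase 1.x admin API; the table name, jar path, and class
name below are placeholders, and this is not the YARN-6094 deployment step. HBase
loads only the named class from the given jar, so anything that class depends on must
be in that jar or already on the region server classpath -- which is exactly the
concern above about a server jar that depends on a separate client jar.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CoprocessorDeploySketch {
  public static void main(String[] args) throws IOException {
    TableName flowRunTable = TableName.valueOf("flowrun");
    try (Connection conn =
             ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      HTableDescriptor desc = admin.getTableDescriptor(flowRunTable);
      // Register the coprocessor class from a jar stored on HDFS; the region
      // servers load it dynamically when the table descriptor is modified.
      desc.addCoprocessor(
          "org.example.FlowRunCoprocessor",                       // placeholder class
          new Path("hdfs:///hbase/coprocessor/timelineservice-hbase-server.jar"),
          Coprocessor.PRIORITY_USER, null);
      admin.modifyTable(flowRunTable, desc);
    }
  }
}
{code}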

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-4985-YARN-5355.prelim.patch
>
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package which does not depend 
> on hadoop-yarn-server classes. It only needs hbase and other util classes.
> These util classes and tag-definition-related classes can be refactored into 
> their own independent "definition" package so that making changes to the 
> coprocessor code, upgrading hbase, or deploying hbase on a cluster with a 
> different hadoop version all become operationally much easier and less error 
> prone with respect to mismatched library jars.






[jira] [Updated] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2017-02-08 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-4985:
-
Attachment: YARN-4985-YARN-5355.prelim.patch

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
> Attachments: YARN-4985-YARN-5355.prelim.patch
>
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package which does not depend 
> on hadoop-yarn-server classes. It only needs hbase and other util classes.
> These util classes and tag-definition-related classes can be refactored into 
> their own independent "definition" package so that making changes to the 
> coprocessor code, upgrading hbase, or deploying hbase on a cluster with a 
> different hadoop version all become operationally much easier and less error 
> prone with respect to mismatched library jars.






[jira] [Commented] (YARN-6059) Update paused container state in the state store

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858695#comment-15858695
 ] 

Hadoop QA commented on YARN-6059:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
 9s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
34s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
52s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
35s{color} | {color:green} YARN-5972 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} YARN-5972 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
25s{color} | {color:green} YARN-5972 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 41s{color} | {color:orange} root: The patch generated 7 new + 305 unchanged 
- 0 fixed = 312 total (was 305) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  4m 
59s{color} | {color:red} root generated 1 new + 11181 unchanged - 0 fixed = 
11182 total (was 11181) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 31s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
37s{color} | {color:red} The patch generated 4 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}224m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFsHdfs |
|   | hadoop.hdfs.server.mover.TestMover |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
|   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6059 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851714/YARN-6059-YARN-5972.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 452f7b0aad68 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision |

[jira] [Commented] (YARN-6113) re-direct NM Web Service to get container logs for finished applications

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858674#comment-15858674
 ] 

Hadoop QA commented on YARN-6113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 227 unchanged - 2 fixed = 228 total (was 229) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 52s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
11s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestTimelineClientV2Impl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6113 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851737/YARN-6113.trunk.v4.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux c5d3eb7986cb 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37b4acf |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreComm

[jira] [Updated] (YARN-4212) FairScheduler: Parent queues is not allowed to be 'Fair' policy if its children have the "drf" policy

2017-02-08 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4212:
---
Attachment: YARN-4212.009.patch

> FairScheduler: Parent queues is not allowed to be 'Fair' policy if its 
> children have the "drf" policy
> -
>
> Key: YARN-4212
> URL: https://issues.apache.org/jira/browse/YARN-4212
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Yufei Gu
>  Labels: fairscheduler
> Attachments: YARN-4212.002.patch, YARN-4212.003.patch, 
> YARN-4212.004.patch, YARN-4212.005.patch, YARN-4212.006.patch, 
> YARN-4212.007.patch, YARN-4212.008.patch, YARN-4212.009.patch, 
> YARN-4212.1.patch
>
>
> The Fair Scheduler, while performing a {{recomputeShares()}} during an 
> {{update()}} call, uses the parent queue's policy to distribute shares to its 
> children.
> If the parent queue's policy is 'fair', it only computes weight for memory and 
> sets the vcores fair share of its children to 0.
> Assuming a situation where we have 1 parent queue with policy 'fair' and 
> multiple leaf queues with policy 'drf', any app submitted to the child queues 
> with a vcore requirement > 1 will always be above fairshare, since during the 
> recomputeShares process the child queues were all assigned 0 for fairshare 
> vcores.
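
For illustration, a hedged sketch of the kind of parent/child policy compatibility
check this issue calls for; the enum and method below are hypothetical, not the
SchedulingPolicy API or the attached patch.

{code:java}
public class PolicyCompatibilitySketch {
  enum Policy { FIFO, FAIR, DRF }

  /** Returns true if a child queue with childPolicy may sit under parentPolicy. */
  static boolean isChildPolicyAllowed(Policy parentPolicy, Policy childPolicy) {
    switch (parentPolicy) {
      case DRF:
        return true;                       // drf parents distribute all resource types
      case FAIR:
        return childPolicy != Policy.DRF;  // a fair parent would zero out drf children's vcores
      case FIFO:
        return false;                      // fifo is only meaningful for leaf queues
      default:
        return false;
    }
  }
}
{code}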






[jira] [Commented] (YARN-4212) FairScheduler: Parent queues is not allowed to be 'Fair' policy if its children have the "drf" policy

2017-02-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858673#comment-15858673
 ] 

ASF GitHub Bot commented on YARN-4212:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/181#discussion_r100194505
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestSchedulingPolicy.java
 ---
@@ -79,66 +79,6 @@ public void testParseSchedulingPolicy()
   }
 
   /**
-   * Trivial tests that make sure
-   * {@link SchedulingPolicy#isApplicableTo(SchedulingPolicy, byte)} works 
as
-   * expected for the possible values of depth
-   * 
-   * @throws AllocationConfigurationException
-   */
-  @Test(timeout = 1000)
-  public void testIsApplicableTo() throws AllocationConfigurationException 
{
--- End diff --

Add a new test case to check that the fifo policy is only allowed for leaf queues.


> FairScheduler: Parent queues is not allowed to be 'Fair' policy if its 
> children have the "drf" policy
> -
>
> Key: YARN-4212
> URL: https://issues.apache.org/jira/browse/YARN-4212
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Yufei Gu
>  Labels: fairscheduler
> Attachments: YARN-4212.002.patch, YARN-4212.003.patch, 
> YARN-4212.004.patch, YARN-4212.005.patch, YARN-4212.006.patch, 
> YARN-4212.007.patch, YARN-4212.008.patch, YARN-4212.1.patch
>
>
> The Fair Scheduler, while performing a {{recomputeShares()}} during an 
> {{update()}} call, uses the parent queue's policy to distribute shares to its 
> children.
> If the parent queue's policy is 'fair', it only computes weight for memory and 
> sets the vcores fair share of its children to 0.
> Assuming a situation where we have 1 parent queue with policy 'fair' and 
> multiple leaf queues with policy 'drf', any app submitted to the child queues 
> with a vcore requirement > 1 will always be above fairshare, since during the 
> recomputeShares process the child queues were all assigned 0 for fairshare 
> vcores.






[jira] [Commented] (YARN-4212) FairScheduler: Parent queues is not allowed to be 'Fair' policy if its children have the "drf" policy

2017-02-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858672#comment-15858672
 ] 

ASF GitHub Bot commented on YARN-4212:
--

Github user flyrain commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/181#discussion_r100194434
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
 ---
@@ -463,4 +461,33 @@ boolean fitsInMaxShare(Resource additionalResource) {
 }
 return true;
   }
+
+  /**
+   * Recursively check policies for queues in pre-order. Get queue policies
+   * from the allocation file instead of properties of {@link FSQueue} 
objects.
+   * Set the policy for current queue if there is no policy violation for 
its
+   * children.
+   *
+   * @param queueConf allocation configuration
+   * @return true if no policy violation and successfully set polices
+   * for queues; false otherwise
+   */
+  public boolean verifyAndSetPolicyFromConf(AllocationConfiguration 
queueConf) {
--- End diff --

Fixed.
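
Based on the javadoc in the diff above, a simplified, self-contained sketch of the pre-order traversal. The types and the concrete policy rules below are stand-ins for FSQueue/AllocationConfiguration, not the actual patch:

{code:java}
// Simplified stand-in types; shown only to illustrate the pre-order check
// described in the javadoc above, not the real FSQueue implementation.
import java.util.ArrayList;
import java.util.List;

class QueueNode {
  final String name;
  final String policy;                    // e.g. "fair", "drf", "fifo"
  final List<QueueNode> children = new ArrayList<>();

  QueueNode(String name, String policy) {
    this.name = name;
    this.policy = policy;
  }

  /** Pre-order check: validate this queue's policy against each child's. */
  boolean verifyPolicies() {
    for (QueueNode child : children) {
      // Assumed rules from this JIRA: a 'fair' parent must not have 'drf'
      // children, and 'fifo' is only valid on leaf queues.
      if ("fair".equals(policy) && "drf".equals(child.policy)) {
        return false;
      }
      if ("fifo".equals(child.policy) && !child.children.isEmpty()) {
        return false;
      }
      if (!child.verifyPolicies()) {
        return false;
      }
    }
    return true;   // the real code would also set the verified policy here
  }
}
{code}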


> FairScheduler: Parent queues is not allowed to be 'Fair' policy if its 
> children have the "drf" policy
> -
>
> Key: YARN-4212
> URL: https://issues.apache.org/jira/browse/YARN-4212
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Yufei Gu
>  Labels: fairscheduler
> Attachments: YARN-4212.002.patch, YARN-4212.003.patch, 
> YARN-4212.004.patch, YARN-4212.005.patch, YARN-4212.006.patch, 
> YARN-4212.007.patch, YARN-4212.008.patch, YARN-4212.1.patch
>
>
> The Fair Scheduler, while performing a {{recomputeShares()}} during an 
> {{update()}} call, uses the parent queue's policy to distribute shares to its 
> children.
> If the parent queue's policy is 'fair', it only computes the weight for memory and 
> sets the vcores fair share of its children to 0.
> Assuming a situation where we have 1 parent queue with policy 'fair' and 
> multiple leaf queues with policy 'drf', any app submitted to the child queues 
> with a vcore requirement > 1 will always be above fairshare, since during the 
> recomputeShares() process the child queues were all assigned 0 for fairshare 
> vcores.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Improve /flows API for more flexible filters fromid, collapse, userid

2017-02-08 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858664#comment-15858664
 ] 

Li Lu commented on YARN-6027:
-

Thanks for the patch [~rohithsharma]! One big-picture question: I'm still not 
100% sure about the meaning of "collapse". It seems like the use case behind this 
is to list all flow activities for a certain user, or to group flow activities by 
user. If that is the case, maybe we want some parameters like groupby=user or 
groupby=userflow for future improvements?

> Improve /flows API for more flexible filters fromid, collapse, userid
> -
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch
>
>
> In YARN-5585 , fromId is supported for retrieving entities. We need similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss following points
> * Should we throw an exception for entities/entity retrieval if duplicates 
> found?
> * TimelieEntity :
> ** Should equals method also check for idPrefix?
> ** Does idPrefix is part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5501) Container Pooling in YARN

2017-02-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858624#comment-15858624
 ] 

Jason Lowe commented on YARN-5501:
--

bq. As part of the detachContainer all the resources associated with the 
pre-initialized container are now associated with the new container and get 
cleaned up accordingly.

Sorry, I got lost here.  A detachContainer does an attach for a specific 
application?  That seems backwards.  Also I'm not sure what is meant by "get 
cleaned up accordingly."

The cleanup comment does make me wonder what is planned when the app's 
container request is "close but not exact" to a preinitialized container.  Is 
the plan to only support exact matches?  I'm thinking of a use-case where the 
container is a base set that applies to all instances of an app framework, but 
each app may need a few extra things localized to do an app-specific thing 
(think UDFs for Hive/Pig, etc.).  Curious if that is planned and how to deal 
with the lifecycle of those "extra" per-app things.

bq. as part of container attach and detach we update this mapping so that it 
now looks like newcontainer456=../container13.pidfile

So it sounds like there is a new container ID generated in the application's 
container namespace as part of the "allocation" to fill the app's request, but 
this container ID is aliased to an already existing container ID in another 
application's namespace, not only at the container executor level but all the 
way up to the container ID seen at the app level, correct?

bq. One of the thoughts we have had is that pre-init containers can be 
considered opportunistic which means they can get killed in favor of other 
containers, but if they do get used then they take the mantle of the new 
container.

One idea is to treat these things like the page cache in Linux.  In other 
words, we keep a cache of idle containers as apps run them.  These containers, 
like page cache entries, will be quickly discarded if they are unused and we 
need to make room for other containers.  We're simply caching successful 
containers that have been run on the cluster, ready to run another task just 
like it.  Apps would still need to make some tweaks to their container code so 
it talks the yet-to-be-detailed-and-mysterious attach/detach protocol so they 
can participate in this automatic container cache, and there would need to be 
changes in how containers are requested so the RM can properly match a request 
to an existing container (something that already has to be done for any reuse 
approach).  Seems like it would adapt well to shifting loads on the cluster and 
doesn't require a premeditated, static config by users to get their app load to 
benefit.  Has something like that been considered?

bq. The resizing is simply at the job object or cgroup level and we expect the 
application to have it's own communication channel to talk with the processes 
that are started a priori.

I think that's going to be challenging for the apps in practice and will limit 
which apps can leverage this feature reliably.  This is going to be challenging 
for containers running VMs whose memory limits need to be set up at startup 
(e.g.: JVMs).  Minimally I think this feature needs a way for apps to specify 
that they do not have a way to communicate (or at least act upon) memory 
changes.  In those cases YARN will have to decide on tradeoffs like a 
primed-but-oversized container that will run fast but waste grid resources and 
also avoid reusing a container that needs to grow to satisfy the app request.

Also the container is already talking this yet-to-be-detailed attach/detach 
protocol, so I would expect any memory change request to also arrive via that 
communication channel.  Why isn't that the case?

bq. Currently we start the pre-init containers by skipping some of the security 
checks done in the "Container Manager". I think we can instead configure the 
user with which pre-init containers should be started and then associate them 
with the actual application.

Making sure we don't mix users is the most basic step, but there's still the 
issue of credentials.  There needs to be a way to convey app-specific 
credentials to these containers and make sure they don't leak between apps.  
The security design should be addressed sooner rather than later, because it's 
going to be difficult to patch it in after the fact.

It sounds like you already have a working PoC and scenarios for it.  These 
would be great to detail via flow/message sequence diagrams detailing the 
operation order for container init, attach, detach, restart, etc.  It would 
also be great to detail what changes apps using this feature will see over what 
they do today (i.e.: if there's something changing re: container IDs, container 
killing, etc.) and what changes are required on their part in order to 
participate.
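
To make the page-cache analogy above concrete, a rough, hypothetical sketch of an idle-container cache keyed by launch context; none of these types map to existing YARN classes:

{code:java}
// Hypothetical sketch only: keep recently finished containers keyed by their
// launch context and evict the least recently used ones when space is needed.
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

class IdleContainerCache<K, C> {
  private final int maxIdle;
  // access-order LinkedHashMap gives simple LRU behavior
  private final LinkedHashMap<K, C> idle = new LinkedHashMap<>(16, 0.75f, true);

  IdleContainerCache(int maxIdle) {
    this.maxIdle = maxIdle;
  }

  /** Called when a container finishes successfully and could be reused. */
  synchronized void offer(K launchContextKey, C container) {
    idle.put(launchContextKey, container);
    evictIfNeeded();
  }

  /** Called when a new request matches an idle container; null on a miss. */
  synchronized C take(K launchContextKey) {
    return idle.remove(launchContextKey);
  }

  /** Analogous to dropping page-cache entries under memory pressure. */
  private void evictIfNeeded() {
    Iterator<Map.Entry<K, C>> it = idle.entrySet().iterator();
    while (idle.size() > maxIdle && it.hasNext()) {
      it.next();
      it.remove();   // evict the least recently used idle container
    }
  }
}
{code}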

> Container Pooling in YARN

[jira] [Commented] (YARN-6113) re-direct NM Web Service to get container logs for finished applications

2017-02-08 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858620#comment-15858620
 ] 

Xuan Gong commented on YARN-6113:
-

[~djp]
Uploaded a new patch to address the comments

> re-direct NM Web Service to get container logs for finished applications
> 
>
> Key: YARN-6113
> URL: https://issues.apache.org/jira/browse/YARN-6113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6113.branch-2.v1.patch, 
> YARN-6113.branch-2.v2.patch, YARN-6113.branch-2.v3.patch, 
> YARN-6113.branch-2.v4.patch, YARN-6113.trunk.v2.patch, 
> YARN-6113.trunk.v3.patch, YARN-6113.trunk.v4.patch
>
>
> In the NM web UI, when we try to get container logs for a finished application, it 
> redirects to the log server based on the configuration property 
> yarn.log.server.url. We should do the same thing for the NM web service.
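
For reference, a minimal sketch of how such a redirect could be built from {{yarn.log.server.url}}. The path layout (nodeId/containerId/containerId/user) is an assumption about the existing NM web UI behavior, not taken from the patch:

{code:java}
// Hedged sketch: build a log-server redirect URL from yarn.log.server.url.
// The assumed path layout below may not match the committed patch exactly.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class LogRedirectExample {
  public static String redirectUrl(Configuration conf, String nodeId,
      String containerId, String user) {
    String logServer = conf.get(YarnConfiguration.YARN_LOG_SERVER_URL);
    if (logServer == null) {
      return null;                        // no log server configured
    }
    return String.format("%s/%s/%s/%s/%s",
        logServer, nodeId, containerId, containerId, user);
  }

  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    conf.set(YarnConfiguration.YARN_LOG_SERVER_URL,
        "http://historyserver.example.com:19888/jobhistory/logs");  // placeholder
    System.out.println(redirectUrl(conf, "nm-host:8041",
        "container_1486500000000_0001_01_000002", "hadoop"));
  }
}
{code}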



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6164) Expose maximum-am-resource-percent to Hadoop clients

2017-02-08 Thread Benson Qiu (JIRA)
Benson Qiu created YARN-6164:


 Summary: Expose maximum-am-resource-percent to Hadoop clients
 Key: YARN-6164
 URL: https://issues.apache.org/jira/browse/YARN-6164
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.7.2
Reporter: Benson Qiu


To my knowledge, there is no way to programmatically obtain 
`yarn.scheduler.capacity.maximum-applications`.

I've tried looking at 
[YarnClient|https://hadoop.apache.org/docs/r2.7.2/api/index.html?org/apache/hadoop/yarn/client/api/YarnClient.html],
 [ResourceManager REST 
APIs|https://hadoop.apache.org/docs/r2.7.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html],
 and 
[YarnCommands|https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YarnCommands.html]
 (via bin/yarn.sh) 
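
One client-side workaround, assuming the client has a copy of capacity-scheduler.xml on its classpath (which is not guaranteed to match what the RM is actually running with), is to read the keys directly from a Configuration:

{code:java}
// Hedged workaround sketch: reads scheduler settings from a local copy of
// capacity-scheduler.xml, not from the running ResourceManager.
import org.apache.hadoop.conf.Configuration;

public class ReadCapacitySchedulerConf {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("capacity-scheduler.xml");   // must be on the classpath

    int maxApps = conf.getInt(
        "yarn.scheduler.capacity.maximum-applications", 10000);
    float maxAmPct = conf.getFloat(
        "yarn.scheduler.capacity.maximum-am-resource-percent", 0.1f);

    System.out.println("maximum-applications = " + maxApps);
    System.out.println("maximum-am-resource-percent = " + maxAmPct);
  }
}
{code}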



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6118) Add javadoc for Resources.isNone

2017-02-08 Thread Andres Perez (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres Perez updated YARN-6118:
---
Attachment: YARN-6118.002.patch

I have corrected the wording as suggested

> Add javadoc for Resources.isNone
> 
>
> Key: YARN-6118
> URL: https://issues.apache.org/jira/browse/YARN-6118
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Andres Perez
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6118.002.patch, YARN-6118.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6113) re-direct NM Web Service to get container logs for finished applications

2017-02-08 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6113:

Attachment: YARN-6113.trunk.v4.patch

> re-direct NM Web Service to get container logs for finished applications
> 
>
> Key: YARN-6113
> URL: https://issues.apache.org/jira/browse/YARN-6113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6113.branch-2.v1.patch, 
> YARN-6113.branch-2.v2.patch, YARN-6113.branch-2.v3.patch, 
> YARN-6113.branch-2.v4.patch, YARN-6113.trunk.v2.patch, 
> YARN-6113.trunk.v3.patch, YARN-6113.trunk.v4.patch
>
>
> In the NM web UI, when we try to get container logs for a finished application, it 
> redirects to the log server based on the configuration property 
> yarn.log.server.url. We should do the same thing for the NM web service.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6137) Yarn client implicitly invoke ATS client which accesses HDFS

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858575#comment-15858575
 ] 

Hudson commented on YARN-6137:
--

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #11223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11223/])
YARN-6137. Yarn client implicitly invoke ATS client which accesses HDFS. 
(jlowe: rev 37b4acf7cee1f05599a84bbb1ebf07979a71f82f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestYarnClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java


> Yarn client implicitly invoke ATS client which accesses HDFS
> 
>
> Key: YARN-6137
> URL: https://issues.apache.org/jira/browse/YARN-6137
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Li Lu
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6137-trunk.001.patch, YARN-6137-trunk.002.patch
>
>
> YARN is implicitly trying to invoke the ATS client even though the client does not 
> need it, and the ATS client code is trying to access HDFS. Because of that, the 
> service is hitting a GSS exception. 
> YarnClient is implicitly creating an ATS client that tries to access HDFS.
> All servers that use YarnClient cannot be expected to change to accommodate 
> this behavior.
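
For context, the general guard being discussed looks roughly like the following; this only illustrates checking {{yarn.timeline-service.enabled}} before creating a timeline client and is not the committed patch:

{code:java}
// Illustration only (not the YARN-6137 patch): skip ATS client creation, and
// therefore any HDFS access it triggers, when the timeline service is disabled.
import org.apache.hadoop.yarn.client.api.TimelineClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class TimelineGuardExample {
  public static TimelineClient maybeCreateTimelineClient(YarnConfiguration conf) {
    boolean enabled = conf.getBoolean(
        YarnConfiguration.TIMELINE_SERVICE_ENABLED,
        YarnConfiguration.DEFAULT_TIMELINE_SERVICE_ENABLED);
    if (!enabled) {
      return null;                        // no ATS client, no HDFS access
    }
    TimelineClient client = TimelineClient.createTimelineClient();
    client.init(conf);
    client.start();
    return client;
  }
}
{code}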



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6113) re-direct NM Web Service to get container logs for finished applications

2017-02-08 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-6113:

Attachment: YARN-6113.branch-2.v4.patch

> re-direct NM Web Service to get container logs for finished applications
> 
>
> Key: YARN-6113
> URL: https://issues.apache.org/jira/browse/YARN-6113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6113.branch-2.v1.patch, 
> YARN-6113.branch-2.v2.patch, YARN-6113.branch-2.v3.patch, 
> YARN-6113.branch-2.v4.patch, YARN-6113.trunk.v2.patch, 
> YARN-6113.trunk.v3.patch
>
>
> In the NM web UI, when we try to get container logs for a finished application, it 
> redirects to the log server based on the configuration property 
> yarn.log.server.url. We should do the same thing for the NM web service.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6061) Add a customized uncaughtexceptionhandler for critical threads in RM

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858571#comment-15858571
 ] 

Hadoop QA commented on YARN-6061:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 117 unchanged - 1 fixed = 120 total (was 118) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m  
7s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNodeLabelUpdate
 |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6061 |
| GITHUB PR | https://github.com/apache/hadoop/pull/182 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8ba9634254bb 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / eec52e1 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14864/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14864/artifact/

[jira] [Commented] (YARN-5501) Container Pooling in YARN

2017-02-08 Thread Hitesh Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858564#comment-15858564
 ] 

Hitesh Sharma commented on YARN-5501:
-

Hi [~jlowe],

First off all a big thanks for taking the time to look at the document and 
sharing your thoughts. I appreciate it a lot.

bq. I am confused on how this will be used in practice.  To me pre-initialized 
containers means containers that have already started up with application- or 
framework-specific resources localized, processes have been launched using 
those resources, and potentially connections already negotiated to external 
services.  I'm not sure how YARN is supposed to know what mix of local 
resources, users, and configs to use for preinitialized containers that will 
get a good "hit rate" on container requests.  Maybe I'm misunderstanding what 
is really meant by "preinitialized," and some concrete, sample use cases with 
detailed walkthroughs of how they work in practice would really help 
crystallize the goals here.

Your understanding of pre-initialized containers is correct here. In the 
proposed design YARN RM has the config to start pre-initialized containers and 
this config is pretty much a launch context, which contains launch commands, 
details of resources to localize, and we also provide the resource constraints 
with which the container should be started. This configuration is currently 
static, but in the future we intend this to be pluggable, so we can extend 
it to be dynamic and adjust based on cluster load.

The first use case happens to be a scenario where each of the containers needs 
to start some processes that take a lot of time to initialize (localization and 
startup costs). YARN NM receives the config to start the pre-initialized 
container (there is a dummy application that is associated with the pre-init 
container for a specific application) and it follows the regular code paths for 
a container which includes localizing resources and launching the container. As 
you know, in YARN a container goes to RUNNING state once started, but a 
pre-initialized container instead goes to PREINITIALIZED state (there are some 
hooks which allow us to know that the container has initialized properly). From 
this point the container is not different from a regular container as the 
container monitor is overlooking it. The "Pool Manager" within YARN NM is used 
to start the pre-initialized container and watches for container events like 
stop, in which case it simply tries to start it again. In other words, at the 
moment we simply use YARN RM to pick the nodes where pre-initialized container 
should be started and let the "Pool Manager" in the NM manage the lifecycle of 
the container.

When the AM for which we pre-initialized the container comes and asks for this 
container then the "Container Manager" takes the pre-initialized container by 
issuing a "detach" container event and "attaches" it to the application. We 
added attachContainer and detachContainer events into ContainerExecutor which 
allow us to define what they mean. As an example, in attachContainer we start a 
new process within the cgroup of pre-initialized container. The PID to 
container mapping within the ContainerExecutor is updated to reflect everything 
accordingly (pre-initialized containers have a different container ID and 
belong to a different application before they are taken up). As part of the 
detachContainer all the resources associated with the pre-initialized container 
are now associated with the new container and get cleaned up accordingly.
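
A rough, hypothetical sketch of the bookkeeping described above; the attach/detach hooks exist only in the prototype, so all names and types below are illustrative stand-ins:

{code:java}
// Hypothetical sketch only: none of these methods exist in the current
// ContainerExecutor. It just illustrates remapping the pidfile bookkeeping
// from a pre-initialized container ID to the app-visible container ID.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PooledContainerBookkeeping {
  // containerId -> pidfile path, as described in the comment above
  private final Map<String, String> pidFiles = new ConcurrentHashMap<>();

  void registerPreInitContainer(String preInitContainerId, String pidFile) {
    pidFiles.put(preInitContainerId, pidFile);
  }

  /** Detach from the dummy app and attach to the requesting application. */
  void attachContainer(String preInitContainerId, String appContainerId) {
    String pidFile = pidFiles.remove(preInitContainerId);   // detach
    if (pidFile != null) {
      pidFiles.put(appContainerId, pidFile);                // attach
    }
  }
}
{code}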

The other use case where we have prototyped container pooling is the scenario 
where a container actually needs to be a Virtual Machine. Creation of VMs can 
take a long time thus container pooling allows us to keep the empty VM shells 
ready to go.

bq. Reusing containers across different applications is going to create some 
interesting scenarios that don't exist today.  For example, what does a 
> container ID for one of these look like?  How many things today assume that 
all container IDs for an application are essentially prefixed by the 
application ID?  This would violate that assumption, unless we introduce some 
sort of container ID aliasing where we create a "fake" container ID that maps 
to the "real" ID of the reused container.  It would be good to know how we're 
going to treat container IDs and what applications will see when they get one 
of these containers in response to their allocation request.

All pre-initialized containers belong to a specific application type. There is 
a dummy application created to which the pre-initialized containers are mapped. 
As part of the containerAttach and containerDetach events we disassociate the 
containers between applications. Specifically, ContainerExecutor has a mapping of 
container ID to PID file and as part of container detach we update this 
mappi

[jira] [Commented] (YARN-6137) Yarn client implicitly invoke ATS client which accesses HDFS

2017-02-08 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858544#comment-15858544
 ] 

Li Lu commented on YARN-6137:
-

Thanks [~jlowe] for the review and commit! 

> Yarn client implicitly invoke ATS client which accesses HDFS
> 
>
> Key: YARN-6137
> URL: https://issues.apache.org/jira/browse/YARN-6137
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Li Lu
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-6137-trunk.001.patch, YARN-6137-trunk.002.patch
>
>
> YARN is implicitly trying to invoke the ATS client even though the client does not 
> need it, and the ATS client code is trying to access HDFS. Because of that, the 
> service is hitting a GSS exception. 
> YarnClient is implicitly creating an ATS client that tries to access HDFS.
> All servers that use YarnClient cannot be expected to change to accommodate 
> this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6137) Yarn client implicitly invoke ATS client which accesses HDFS

2017-02-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858535#comment-15858535
 ] 

Jason Lowe commented on YARN-6137:
--

+1 lgtm.  Committing this.

> Yarn client implicitly invoke ATS client which accesses HDFS
> 
>
> Key: YARN-6137
> URL: https://issues.apache.org/jira/browse/YARN-6137
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yesha Vora
>Assignee: Li Lu
> Attachments: YARN-6137-trunk.001.patch, YARN-6137-trunk.002.patch
>
>
> YARN is implicitly trying to invoke the ATS client even though the client does not 
> need it, and the ATS client code is trying to access HDFS. Because of that, the 
> service is hitting a GSS exception. 
> YarnClient is implicitly creating an ATS client that tries to access HDFS.
> All servers that use YarnClient cannot be expected to change to accommodate 
> this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6061) Add a customized uncaughtexceptionhandler for critical threads in RM

2017-02-08 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6061:
---
Attachment: YARN-6061.008.patch

> Add a customized uncaughtexceptionhandler for critical threads in RM
> 
>
> Key: YARN-6061
> URL: https://issues.apache.org/jira/browse/YARN-6061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6061.001.patch, YARN-6061.002.patch, 
> YARN-6061.003.patch, YARN-6061.004.patch, YARN-6061.005.patch, 
> YARN-6061.006.patch, YARN-6061.007.patch, YARN-6061.008.patch
>
>
> There are several threads in fair scheduler. The thread will quit when there 
> is a runtime exception inside it. We should bring down the RM when that 
> happens. Otherwise, there may be some weird behavior in RM. 
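
For reference, a minimal sketch of the kind of handler being discussed; the actual patch may differ in how it logs and shuts the RM down:

{code:java}
// Minimal sketch: bring the process down on an uncaught exception in a
// critical thread instead of letting the scheduler limp along.
import org.apache.hadoop.util.ExitUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CriticalThreadUncaughtExceptionHandler
    implements Thread.UncaughtExceptionHandler {

  private static final Logger LOG =
      LoggerFactory.getLogger(CriticalThreadUncaughtExceptionHandler.class);

  @Override
  public void uncaughtException(Thread t, Throwable e) {
    LOG.error("Critical thread " + t.getName() + " died unexpectedly", e);
    ExitUtil.terminate(1, "Uncaught exception in critical thread " + t.getName());
  }
}

// Usage sketch:
//   thread.setUncaughtExceptionHandler(new CriticalThreadUncaughtExceptionHandler());
{code}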



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6061) Add a customized uncaughtexceptionhandler for critical threads in RM

2017-02-08 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6061:
---
Attachment: (was: YARN-6061.008.patch)

> Add a customized uncaughtexceptionhandler for critical threads in RM
> 
>
> Key: YARN-6061
> URL: https://issues.apache.org/jira/browse/YARN-6061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6061.001.patch, YARN-6061.002.patch, 
> YARN-6061.003.patch, YARN-6061.004.patch, YARN-6061.005.patch, 
> YARN-6061.006.patch, YARN-6061.007.patch, YARN-6061.008.patch
>
>
> There are several threads in fair scheduler. The thread will quit when there 
> is a runtime exception inside it. We should bring down the RM when that 
> happens. Otherwise, there may be some weird behavior in RM. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6065) Update javadocs of new added APIs / classes of scheduler/AppSchedulingInfo

2017-02-08 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858440#comment-15858440
 ] 

Karthik Kambatla commented on YARN-6065:


I can attest to the fact that the code is much harder to understand. Any 
documentation would greatly help. I am happy to contribute through thorough 
reviews on this. :)

> Update javadocs of new added APIs / classes of scheduler/AppSchedulingInfo
> --
>
> Key: YARN-6065
> URL: https://issues.apache.org/jira/browse/YARN-6065
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-08 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858402#comment-15858402
 ] 

Karthik Kambatla commented on YARN-6163:


Discussed this offline with [~templedf]. 

The following approach seems reasonable (a sketch follows the list). Appreciate any inputs from others.
# When processing a starved application, preempt for enough ResourceRequests 
that correspond to the current starvation.
# Track the containers (and the amount of resources) that are being starved for 
this application. 
# Mark the app starved again, only after all the marked containers are 
preempted and a delay (twice the node heartbeat interval) has passed.
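
A hypothetical sketch of that bookkeeping (not actual FairScheduler code): remember the containers marked for preemption on behalf of the app, and allow re-marking starvation only after they are all preempted and the delay has elapsed.

{code:java}
// Hypothetical sketch of the approach in the list above.
import java.util.HashSet;
import java.util.Set;

class StarvationTracker {
  private final Set<String> pendingPreemptions = new HashSet<>();
  private final long delayMs;
  private long lastSatisfiedAt = 0L;

  StarvationTracker(long nodeHeartbeatIntervalMs) {
    this.delayMs = 2 * nodeHeartbeatIntervalMs;   // twice the heartbeat interval
  }

  /** Step 2: track the containers being preempted for this application. */
  synchronized void markContainersForPreemption(Set<String> containerIds) {
    pendingPreemptions.addAll(containerIds);
  }

  synchronized void containerPreempted(String containerId) {
    if (pendingPreemptions.remove(containerId) && pendingPreemptions.isEmpty()) {
      lastSatisfiedAt = System.currentTimeMillis();
    }
  }

  /** Step 3: the app may be marked starved again only when both conditions hold. */
  synchronized boolean canMarkStarvedAgain() {
    return pendingPreemptions.isEmpty()
        && System.currentTimeMillis() - lastSatisfiedAt >= delayMs;
  }
}
{code}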

> FS Preemption is a trickle for severely starved applications
> 
>
> Key: YARN-6163
> URL: https://issues.apache.org/jira/browse/YARN-6163
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>
> With current logic, only one RR is considered for each instance of marking an 
> application starved. This marking happens only on the update call that runs 
> every 500ms.  Due to this, an application that is severely starved takes 
> forever to reach fairshare based on preemptions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6163) FS Preemption is a trickle for severely starved applications

2017-02-08 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-6163:
--

 Summary: FS Preemption is a trickle for severely starved 
applications
 Key: YARN-6163
 URL: https://issues.apache.org/jira/browse/YARN-6163
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: fairscheduler
Affects Versions: 2.9.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


With current logic, only one RR is considered for each instance of marking an 
application starved. This marking happens only on the update call that runs 
every 500ms.  Due to this, an application that is severely starved takes 
forever to reach fairshare based on preemptions.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6061) Add a customized uncaughtexceptionhandler for critical threads in RM

2017-02-08 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6061:
---
Attachment: YARN-6061.008.patch

> Add a customized uncaughtexceptionhandler for critical threads in RM
> 
>
> Key: YARN-6061
> URL: https://issues.apache.org/jira/browse/YARN-6061
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6061.001.patch, YARN-6061.002.patch, 
> YARN-6061.003.patch, YARN-6061.004.patch, YARN-6061.005.patch, 
> YARN-6061.006.patch, YARN-6061.007.patch, YARN-6061.008.patch
>
>
> There are several threads in fair scheduler. The thread will quit when there 
> is a runtime exception inside it. We should bring down the RM when that 
> happens. Otherwise, there may be some weird behavior in RM. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6144) FairScheduler: preempted resources can become negative

2017-02-08 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858382#comment-15858382
 ] 

Miklos Szegedi commented on YARN-6144:
--

I cannot repro 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer locally.

> FairScheduler: preempted resources can become negative
> --
>
> Key: YARN-6144
> URL: https://issues.apache.org/jira/browse/YARN-6144
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Blocker
> Attachments: Screen Shot 2017-02-02 at 12.49.14 PM.png, 
> YARN-6144.000.patch, YARN-6144.001.patch, YARN-6144.002.patch
>
>
> {{preemptContainers()}} calls {{trackContainerForPreemption()}} to collect 
> the list of containers and resources that were preempted for an application. 
> Later the list is reduced when {{containerCompleted()}} calls 
> {{untrackContainerForPreemption()}}. The bug is that the resource variable 
> {{preemptedResources}} is subtracted, not just when the container was 
> preempted but also when it has completed successfully. This causes us to 
> return an incorrect value in {{getResourceUsage()}}.
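
For illustration, the shape of the guard being described (not the actual patch): only subtract from the preempted-resources tracking when the completed container was actually marked for preemption.

{code:java}
// Illustration only: normal completions must not reduce the preempted total.
import java.util.HashSet;
import java.util.Set;

class PreemptionAccounting {
  private final Set<String> containersToPreempt = new HashSet<>();
  private long preemptedMemoryMb = 0;   // simplified stand-in for a Resource

  synchronized void trackContainerForPreemption(String containerId, long memMb) {
    if (containersToPreempt.add(containerId)) {
      preemptedMemoryMb += memMb;
    }
  }

  synchronized void containerCompleted(String containerId, long memMb) {
    // The guard: untrack only if this container was marked for preemption.
    if (containersToPreempt.remove(containerId)) {
      preemptedMemoryMb -= memMb;
    }
  }

  synchronized long getPreemptedMemoryMb() {
    return preemptedMemoryMb;
  }
}
{code}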



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6056) Yarn NM using LCE shows a failure when trying to delete a non-existing dir

2017-02-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858379#comment-15858379
 ] 

Varun Saxena commented on YARN-6056:


Sorry I had missed this one. Will commit it by tomorrow.

> Yarn NM using LCE shows a failure when trying to delete a non-existing dir
> --
>
> Key: YARN-6056
> URL: https://issues.apache.org/jira/browse/YARN-6056
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.6.4
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6056-branch-2.6.01.patch, 
> YARN-6056-branch-2.6.1.patch
>
>
> As part of YARN-2902 the clean up of the local directories was changed to 
> ignore non-existent directories and proceed with the others in the list. This 
> part of the code change was not backported into branch-2.6, backporting just 
> that part now.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6162) Cluster Writeable APIs in RM Rest APIs page refers to APIs as alpha but they are already in use with Spark

2017-02-08 Thread Grant Sohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Sohn reassigned YARN-6162:


Assignee: Grant Sohn

> Cluster Writeable APIs in RM Rest APIs page refers to APIs as alpha but they 
> are already in use with Spark
> --
>
> Key: YARN-6162
> URL: https://issues.apache.org/jira/browse/YARN-6162
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, site
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
>
> Excerpt with documentation that should be removed.
> {quote}
> Cluster Writeable APIs
> The setions below refer to APIs which allow to create and modify 
> applications. -These APIs are currently in alpha and may change in the 
> future.-
> Cluster New Application API
> With the New Application API, you can obtain an application-id which can then 
> be used as part of the Cluster Submit Applications API to submit 
> applications. The response also includes the maximum resource capabilities 
> available on the cluster.
> -This feature is currently in the alpha stage and may change in the future.-
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6059) Update paused container state in the state store

2017-02-08 Thread Hitesh Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858366#comment-15858366
 ] 

Hitesh Sharma edited comment on YARN-6059 at 2/8/17 6:37 PM:
-

[~asuresh], the patch is renamed.


was (Author: hrsharma):
Renaming the patch.

> Update paused container state in the state store
> 
>
> Key: YARN-6059
> URL: https://issues.apache.org/jira/browse/YARN-6059
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5216-YARN-6059.001.patch, 
> YARN-6059-YARN-5972.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6059) Update paused container state in the state store

2017-02-08 Thread Hitesh Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Sharma updated YARN-6059:

Attachment: YARN-6059-YARN-5972.001.patch

Renaming the patch.

> Update paused container state in the state store
> 
>
> Key: YARN-6059
> URL: https://issues.apache.org/jira/browse/YARN-6059
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5216-YARN-6059.001.patch, 
> YARN-6059-YARN-5972.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6059) Update paused container state in the state store

2017-02-08 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858361#comment-15858361
 ] 

Arun Suresh commented on YARN-6059:
---

[~hrsharma], can you rename the patch to YARN-6059-YARN-5972.001.patch ?

> Update paused container state in the state store
> 
>
> Key: YARN-6059
> URL: https://issues.apache.org/jira/browse/YARN-6059
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Hitesh Sharma
>Assignee: Hitesh Sharma
> Attachments: YARN-5216-YARN-6059.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6162) Cluster Writeable APIs in RM Rest APIs page refers to APIs as alpha but they are already in use with Spark

2017-02-08 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858358#comment-15858358
 ] 

Grant Sohn commented on YARN-6162:
--

Here's a link showing usage in Spark.

https://community.hortonworks.com/articles/28070/starting-spark-jobs-directly-via-yarn-rest-api.html
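
The article relies on the documented RM REST endpoints; for example, obtaining an application-id starts with a POST to the new-application endpoint (the RM host and port below are placeholders):

{code:java}
// Sketch of the documented Cluster New Application API call. The response
// carries an application-id and the maximum resource capabilities.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NewApplicationRequest {
  public static void main(String[] args) throws IOException {
    URL url = new URL(
        "http://rm.example.com:8088/ws/v1/cluster/apps/new-application");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Accept", "application/json");

    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      StringBuilder body = new StringBuilder();
      String line;
      while ((line = reader.readLine()) != null) {
        body.append(line);
      }
      System.out.println(body);   // JSON with "application-id", etc.
    } finally {
      conn.disconnect();
    }
  }
}
{code}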

> Cluster Writeable APIs in RM Rest APIs page refers to APIs as alpha but they 
> are already in use with Spark
> --
>
> Key: YARN-6162
> URL: https://issues.apache.org/jira/browse/YARN-6162
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, site
>Reporter: Grant Sohn
>Priority: Trivial
>
> Excerpt with documentation that should be removed.
> {quote}
> Cluster Writeable APIs
> The setions below refer to APIs which allow to create and modify 
> applications. -These APIs are currently in alpha and may change in the 
> future.-
> Cluster New Application API
> With the New Application API, you can obtain an application-id which can then 
> be used as part of the Cluster Submit Applications API to submit 
> applications. The response also includes the maximum resource capabilities 
> available on the cluster.
> -This feature is currently in the alpha stage and may change in the future.-
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6162) Cluster Writeable APIs in RM Rest APIs page refers to APIs as alpha but they are already in use with Spark

2017-02-08 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-6162:


 Summary: Cluster Writeable APIs in RM Rest APIs page refers to 
APIs as alpha but they are already in use with Spark
 Key: YARN-6162
 URL: https://issues.apache.org/jira/browse/YARN-6162
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation, site
Reporter: Grant Sohn
Priority: Trivial


Excerpt with documentation that should be removed.

{quote}
Cluster Writeable APIs

The setions below refer to APIs which allow to create and modify applications. 
-These APIs are currently in alpha and may change in the future.-

Cluster New Application API

With the New Application API, you can obtain an application-id which can then 
be used as part of the Cluster Submit Applications API to submit applications. 
The response also includes the maximum resource capabilities available on the 
cluster.

-This feature is currently in the alpha stage and may change in the future.-
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6161) YARN support for port allocation

2017-02-08 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-6161:


 Summary: YARN support for port allocation
 Key: YARN-6161
 URL: https://issues.apache.org/jira/browse/YARN-6161
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
 Fix For: yarn-native-services


Since there is no agent code in YARN native services, we need another mechanism 
for allocating ports to containers. This is not necessary when running Docker 
containers, but it will become important when an agent-less docker-less 
provider is introduced.
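
One common approach (an assumption here, not a committed design) is to let the NM or provider bind to port 0 and hand the OS-chosen port to the container; note the inherent race between releasing the socket and the service binding it:

{code:java}
// Sketch of OS-assisted port allocation by binding to port 0.
import java.io.IOException;
import java.net.ServerSocket;

public class PortAllocator {
  public static int allocateFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      socket.setReuseAddress(true);
      return socket.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println("Allocated port: " + allocateFreePort());
  }
}
{code}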



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6056) Yarn NM using LCE shows a failure when trying to delete a non-existing dir

2017-02-08 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858325#comment-15858325
 ] 

Jonathan Hung commented on YARN-6056:
-

Hi [~varun_saxena], any chance we can get this committed to branch-2.6?

> Yarn NM using LCE shows a failure when trying to delete a non-existing dir
> --
>
> Key: YARN-6056
> URL: https://issues.apache.org/jira/browse/YARN-6056
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.6.4
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6056-branch-2.6.01.patch, 
> YARN-6056-branch-2.6.1.patch
>
>
> As part of YARN-2902 the clean up of the local directories was changed to 
> ignore non-existent directories and proceed with the others in the list. This 
> part of the code change was not backported into branch-2.6, backporting just 
> that part now.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6056) Yarn NM using LCE shows a failure when trying to delete a non-existing dir

2017-02-08 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858325#comment-15858325
 ] 

Jonathan Hung edited comment on YARN-6056 at 2/8/17 6:06 PM:
-

Hi [~varun_saxena], any chance we can get this committed to branch-2.6? (It 
also looks good to me.)


was (Author: jhung):
Hi [~varun_saxena], any chance we can get this committed to branch-2.6?

> Yarn NM using LCE shows a failure when trying to delete a non-existing dir
> --
>
> Key: YARN-6056
> URL: https://issues.apache.org/jira/browse/YARN-6056
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.6.4
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
> Attachments: YARN-6056-branch-2.6.01.patch, 
> YARN-6056-branch-2.6.1.patch
>
>
> As part of YARN-2902 the clean up of the local directories was changed to 
> ignore non-existent directories and proceed with the others in the list. This 
> part of the code change was not backported into branch-2.6, backporting just 
> that part now.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6160) Create an agent-less docker-less provider in the native services framework

2017-02-08 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-6160:


 Summary: Create an agent-less docker-less provider in the native 
services framework
 Key: YARN-6160
 URL: https://issues.apache.org/jira/browse/YARN-6160
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi
 Fix For: yarn-native-services


The goal of the agent-less docker-less provider is to be able to use the YARN 
native services framework when Docker is not installed or other methods of app 
resource installation are preferable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5258) Document Use of Docker with LinuxContainerExecutor

2017-02-08 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5258:

Fix Version/s: 3.0.0-alpha3

> Document Use of Docker with LinuxContainerExecutor
> --
>
> Key: YARN-5258
> URL: https://issues.apache.org/jira/browse/YARN-5258
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-easy
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-5258.001.patch, YARN-5258.002.patch, 
> YARN-5258.003.patch, YARN-5258.004.patch, YARN-5258.005.patch
>
>
> There aren't currently any docs that explain how to configure Docker and all 
> of its various options aside from reading all of the JIRAs.  We need to 
> document the configuration, use, and troubleshooting, along with helpful 
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-02-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858276#comment-15858276
 ] 

Varun Saxena commented on YARN-6130:


I have broken compatibility here in AllocateResponse if ATSv2 is enabled, 
because I thought sending the AppCollectorData structure would be more suitable. 
This will, however, be an issue during rolling upgrades. Do we have to maintain 
compatibility across 3.0.0 alpha releases? We can avoid this if needed; I will 
decide based on whether the change is acceptable across alpha releases.

> [Security] Generate a delegation token for AM when app collector is created 
> and pass it to AM via NM and RM
> ---
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-6130-YARN-5355.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6150) TestContainerManagerSecurity tests for Yarn Server are flakey

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858237#comment-15858237
 ] 

Hadoop QA commented on YARN-6150:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests: 
The patch generated 2 new + 27 unchanged - 11 fixed = 29 total (was 38) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 26s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
|   | hadoop.yarn.server.TestContainerManagerSecurity |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6150 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851652/YARN-6150.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 686305f29a4b 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / eec52e1 |
| Default Java | 1.8.0_121 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/14862/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/14862/arti

[jira] [Commented] (YARN-5501) Container Pooling in YARN

2017-02-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858174#comment-15858174
 ] 

Jason Lowe commented on YARN-5501:
--

Thanks for posting the design document!

I am confused about how this will be used in practice.  To me, pre-initialized 
containers are containers that have already started up with application- or 
framework-specific resources localized, processes launched using 
those resources, and potentially connections already negotiated to external 
services.  I'm not sure how YARN is supposed to know what mix of local 
resources, users, and configs to use for preinitialized containers in order to 
get a good "hit rate" on container requests.  Maybe I'm misunderstanding what 
is really meant by "preinitialized," and some concrete, sample use cases with 
detailed walkthroughs of how they work in practice would really help 
crystallize the goals here.

Reusing containers across different applications is going to create some 
interesting scenarios that don't exist today.  For example, what does a 
container ID for one of these look like?  How many things today assume that 
all container IDs for an application are essentially prefixed by the 
application ID?  This would violate that assumption, unless we introduce some 
sort of container ID aliasing where we create a "fake" container ID that maps 
to the "real" ID of the reused container.  It would be good to know how we're 
going to treat container IDs and what applications will see when they get one 
of these containers in response to their allocation request.

What happens when a preinitialized container fails, both during application 
execution and when idle?  Do we let the application launch its own recovery 
container, and so on?

How does resource accounting/scheduling work with these containers?  Are they 
running in a dedicated queue?  Can users go beyond their normal limits by 
getting these containers outside of the app's queue?  Will it look weird when 
the user's queue isn't full yet we don't allow them any more containers because 
they're already using the maximum number of preinitialized containers?

What are the security considerations?  Are preinitialized containers tied to a 
particular user?  How are app-specific credentials conveyed to the 
preinitialized container, and can credentials leak between apps that use the 
same container?

How does the attached container know it's being attached and what to do when 
that occurs?  For example, if the app framework requires an umbilical 
connection back to the AM, how does the preinitialized container know where 
that AM is and when to connect?

bq. As NM advertises the pre-initialized containers to the RM it will be 
assigned some of the container requests.

Is this some separate, new protocol for advertising, or is it simply 
reporting that the container is launched, like any other container status 
report today?  The RM already knows it sent the NM a command to launch the 
container, so it seems this is just the NM reporting that the container is now 
launched, as it does for any other container start request today, but I wasn't 
sure if that is what was meant here.

bq. When the container requests come then the pre-initialized container could 
be resized to match the resource allocation request and get attached to the job.

Could you elaborate on how the resizing works?  How are the processes running 
within the container being resized made aware of the new size constraints?  
Today containers don't communicate with the NM directly, so I'm not sure how 
the preinitialized containers are supposed to know they are suddenly half the 
size or can now leverage more memory than they could before.  Without that 
communication channel it seems like we're either going to kill processes that 
overflow their lowered memory constraint or we're going to waste cluster 
resources because the processes are still trying to fit within the old memory 
size.

bq. attaching the container might mean launching the container inside the Linux 
CGroup or Windows job object.

I'm confused, I thought the preinitialized container is already launched, but 
this talks about launching it after attach.  Again a concrete use-case 
walkthrough would help clarify what's really being proposed.  If this is 
primarily about reducing localization time instead of process startup after 
localization then there are simpler approaches we can take.


> Container Pooling in YARN
> -
>
> Key: YARN-5501
> URL: https://issues.apache.org/jira/browse/YARN-5501
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: Container Pooling - one pager.pdf
>
>
> This JIRA proposes a method for reducing the container launch latency in 
> YARN. It introduces a

[jira] [Updated] (YARN-6150) TestContainerManagerSecurity tests for Yarn Server are flakey

2017-02-08 Thread Daniel Sturman (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Sturman updated YARN-6150:
-
Attachment: YARN-6150.006.patch

> TestContainerManagerSecurity tests for Yarn Server are flakey
> -
>
> Key: YARN-6150
> URL: https://issues.apache.org/jira/browse/YARN-6150
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Sturman
>Assignee: Daniel Sturman
> Attachments: YARN-6150.001.patch, YARN-6150.002.patch, 
> YARN-6150.003.patch, YARN-6150.004.patch, YARN-6150.005.patch, 
> YARN-6150.006.patch
>
>
> Repeated runs of 
> {{org.apache.hadoop.yarn.server.TestContainerManagerSecurity}} can either 
> pass or fail on the same codebase.  Also, the two runs (one in secure mode, 
> one without security) aren't well labeled in JUnit.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4675) Reorganize TimelineClient and TimelineClientImpl into separate classes for ATSv1.x and ATSv2

2017-02-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4675:
---
Summary: Reorganize TimelineClient and TimelineClientImpl into separate 
classes for ATSv1.x and ATSv2  (was: Reorganize TimeClientImpl into 
TimeClientV1Impl and TimeClientV2Impl)

> Reorganize TimelineClient and TimelineClientImpl into separate classes for 
> ATSv1.x and ATSv2
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl, 
> TimeClientV2Impl and, if required, a base class, so that it is clear which 
> part of the code belongs to which version and the code is easier to maintain.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2017-02-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858001#comment-15858001
 ] 

Varun Saxena commented on YARN-4675:


Thanks [~Naganarasimha] for the patch. Looks pretty close. A few minor comments:

# In TimelineConnector, connectionRetry should be annotated with 
VisibleForTesting.
# Can't we add a serviceInit method in AMRMClient that stores whether timeline 
service v2 is enabled, instead of adding an abstract method 
registerTimelineV2Client?
# The changes here need to be reflected in the timeline v2 documentation as 
well. Do you want to do it here or in another JIRA? As the change is very 
small, maybe we can do it here itself. Thoughts?
# In YarnClientImpl, we check YarnConfiguration#timelineServiceV2Enabled, which 
internally checks whether the timeline service is enabled again. We can use 
YarnConfiguration#getTimelineServiceVersion instead. This can be done at other 
places as well.
{code}
170 if (conf.getBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED,
171 YarnConfiguration.DEFAULT_TIMELINE_SERVICE_ENABLED)
172 && !YarnConfiguration.timelineServiceV2Enabled(conf)) {
{code}
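For illustration, a minimal sketch of the suggested simplification is below. It 
is only a sketch, assuming {{YarnConfiguration#getTimelineServiceVersion}} 
returns the configured version as a number; the helper name is made up and is 
not part of the patch:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hedged sketch (not from the patch): check the enabled flag once and then the
// configured version, instead of also calling timelineServiceV2Enabled(), which
// re-checks TIMELINE_SERVICE_ENABLED internally.
static boolean useTimelineV1Client(Configuration conf) {
  return conf.getBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED,
      YarnConfiguration.DEFAULT_TIMELINE_SERVICE_ENABLED)
      && YarnConfiguration.getTimelineServiceVersion(conf) < 2.0f;
}
{code}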

By the way, can the other checkstyle issues be fixed? A few existed before and 
are appearing in the report only because you renamed the variable.


> Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4675.v2.002.patch, YARN-4675.v2.003.patch, 
> YARN-4675.v2.004.patch, YARN-4675.v2.005.patch, YARN-4675.v2.006.patch, 
> YARN-4675.v2.007.patch, YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl, 
> TimeClientV2Impl and, if required, a base class, so that it is clear which 
> part of the code belongs to which version and the code is easier to maintain.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6159) In timeline v2 documentation, the code snippet given for creating timeline client has a mistake

2017-02-08 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R reassigned YARN-6159:
---

Assignee: Naganarasimha G R

> In timeline v2 documentation, the code snippet given for creating timeline 
> client has a mistake
> ---
>
> Key: YARN-6159
> URL: https://issues.apache.org/jira/browse/YARN-6159
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Reporter: Varun Saxena
>Assignee: Naganarasimha G R
>Priority: Trivial
>
> In TimelineServiceV2.md, under section Publishing application specific data, 
> we have the following code snippet. Here, 
> {{timelineClient.putEntitiesAsync(entity);}} should be 
> {{client.putEntitiesAsync(entity);}} instead.
> {code}
> // Create and start the Timeline client v.2
> TimelineClient client = TimelineClient.createTimelineClient(appId);
> client.init(conf);
> client.start();
> try {
>   TimelineEntity myEntity = new TimelineEntity();
>   myEntity.setEntityType("MY_APPLICATION");
>   myEntity.setEntityId("MyApp1")
>   // Compose other entity info
>   // Blocking write
>   client.putEntities(entity);
>   TimelineEntity myEntity2 = new TimelineEntity();
>   // Compose other info
>   // Non-blocking write
>   timelineClient.putEntitiesAsync(entity);
> } catch (IOException e) {
>   // Handle the exception
> } catch (RuntimeException e) {
> {code}
> Below can also be changed to client to keep it consistent.
> {code}
> amRMClient.registerTimelineClient(timelineClient);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6113) re-direct NM Web Service to get container logs for finished applications

2017-02-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857957#comment-15857957
 ] 

Junping Du commented on YARN-6113:
--

The patch looks good overall. Just one minor point: the logic within 
removeSpecifiedQueryParameter() looks very similar to 
WebAppUtils.getURLEncodedQueryString() - the only difference is that it removes 
the specified parameter. Can we consolidate the two methods in some way to 
avoid duplicated code? 
Everything else looks fine to me.

> re-direct NM Web Service to get container logs for finished applications
> 
>
> Key: YARN-6113
> URL: https://issues.apache.org/jira/browse/YARN-6113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-6113.branch-2.v1.patch, 
> YARN-6113.branch-2.v2.patch, YARN-6113.branch-2.v3.patch, 
> YARN-6113.trunk.v2.patch, YARN-6113.trunk.v3.patch
>
>
> In NM web ui, when we try to get container logs for finished application, it 
> would redirect to the log server based on the configuration: 
> yarn.log.server.url. We should do the similar thing for NM WebService



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6159) In timeline v2 documentation, the code snippet given for creating timeline client has a mistake

2017-02-08 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-6159:
--

 Summary: In timeline v2 documentation, the code snippet given for 
creating timeline client has a mistake
 Key: YARN-6159
 URL: https://issues.apache.org/jira/browse/YARN-6159
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Reporter: Varun Saxena
Priority: Trivial


In TimelineServiceV2.md, under section Publishing application specific data, we 
have the following code snippet. Here, 
{{timelineClient.putEntitiesAsync(entity);}} should be 
{{client.putEntitiesAsync(entity);}} instead.
{code}
// Create and start the Timeline client v.2
TimelineClient client = TimelineClient.createTimelineClient(appId);
client.init(conf);
client.start();

try {
  TimelineEntity myEntity = new TimelineEntity();
  myEntity.setEntityType("MY_APPLICATION");
  myEntity.setEntityId("MyApp1")
  // Compose other entity info

  // Blocking write
  client.putEntities(entity);

  TimelineEntity myEntity2 = new TimelineEntity();
  // Compose other info

  // Non-blocking write
  timelineClient.putEntitiesAsync(entity);

} catch (IOException e) {
  // Handle the exception
} catch (RuntimeException e) {
{code}
Below can also be changed to client to keep it consistent.
{code}
amRMClient.registerTimelineClient(timelineClient);
{code}
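For reference, a sketch of the snippet with the fix applied is below. Besides 
renaming {{timelineClient}} to {{client}}, the undefined {{entity}} references 
are assumed to mean the entities composed just above; that part is an 
assumption made for readability, not something stated in the description:
{code}
// Corrected sketch: the same local variable "client" is used for the blocking
// and non-blocking writes, and the composed entities are the ones written.
TimelineClient client = TimelineClient.createTimelineClient(appId);
client.init(conf);
client.start();

try {
  TimelineEntity myEntity = new TimelineEntity();
  myEntity.setEntityType("MY_APPLICATION");
  myEntity.setEntityId("MyApp1");
  // Compose other entity info

  // Blocking write
  client.putEntities(myEntity);

  TimelineEntity myEntity2 = new TimelineEntity();
  // Compose other info

  // Non-blocking write
  client.putEntitiesAsync(myEntity2);

} catch (IOException e) {
  // Handle the exception
} catch (RuntimeException e) {
  // Handle the exception
}

// Register the same client with the AM-RM client:
amRMClient.registerTimelineClient(client);
{code}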



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5994) TestCapacityScheduler.testAMLimitUsage fails intermittently

2017-02-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857866#comment-15857866
 ] 

Sunil G edited comment on YARN-5994 at 2/8/17 11:51 AM:


Thanks [~ebadger]

Ideally this makes sense. Since this test only cares about application 
activation, we don't need an NM to be registered. However, for the internal 
cluster resource and other data structures, we need some resources in the 
cluster. 

For example, *scheduler.getMaximumResourceCapability()* is calculated as
{{Math.min(configuredMaxAllocation.getMemorySize(), maxNodeMemory)}}, i.e. the 
minimum of 8GB (max-allocation-mb) and the maximum memory across all nodes. 
Here the max NM memory is 2GB, hence 
{{scheduler.getMaximumResourceCapability()}} becomes 2GB. This causes 
*RMAppManager#validateAndCreateResourceRequest* to fail the app, as the AM 
resource request of 3GB is greater than 
{{scheduler.getMaximumResourceCapability()}}.

So bringing NM memory to 4GB is fine. [~bibinchundatt], any comments on this?
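In other words, a purely illustrative computation with the numbers above (the 
variable names are made up for the example, not taken from the code):
{code}
// Illustrative arithmetic only, values taken from the comment above.
long configuredMaxAllocationMb = 8192; // yarn.scheduler.maximum-allocation-mb
long maxNodeMemoryMb = 2048;           // largest memory across registered NMs
long effectiveMaxMb = Math.min(configuredMaxAllocationMb, maxNodeMemoryMb); // 2048
// The 3072 MB AM request exceeds 2048 MB, so validateAndCreateResourceRequest
// fails the app.
boolean rejected = 3072 > effectiveMaxMb; // true
{code}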


was (Author: sunilg):
Thanks [~ebadger]

Ideally this make sense. Since this test is only caring about application 
activation, we dont need an NM to be registered. However for internal cluster 
resource and other data  structures, we need some resource in cluster. For eg:
{{Math.min(configuredMaxAllocation.getMemorySize(), maxNodeMemory)}}, we 
consider 8GB (max-allocation-mb) or max-memory across all nodes. 

So bringing NM memory to 4G is fine. [~bibinchundatt], any comments on this?

> TestCapacityScheduler.testAMLimitUsage fails intermittently
> ---
>
> Key: YARN-5994
> URL: https://issues.apache.org/jira/browse/YARN-5994
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler-output.txt,
>  YARN-5994.001.patch
>
>
> {noformat}
> java.lang.AssertionError: app shouldn't be null
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:169)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:577)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:488)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.verifyAMLimitForLeafQueue(TestCapacityScheduler.java:3389)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testAMLimitUsage(TestCapacityScheduler.java:3251)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6031) Application recovery has failed when node label feature is turned off during RM recovery

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857871#comment-15857871
 ] 

Hadoop QA commented on YARN-6031:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 51s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | YARN-6031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851578/YARN-6031-branch-2.8.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1d714a427b8a 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 20

[jira] [Commented] (YARN-5994) TestCapacityScheduler.testAMLimitUsage fails intermittently

2017-02-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857866#comment-15857866
 ] 

Sunil G commented on YARN-5994:
---

Thanks [~ebadger]

Ideally this makes sense. Since this test only cares about application 
activation, we don't need an NM to be registered. However, for the internal 
cluster resource and other data structures, we need some resources in the 
cluster. For example, with
{{Math.min(configuredMaxAllocation.getMemorySize(), maxNodeMemory)}}, we take 
the minimum of 8GB (max-allocation-mb) and the maximum memory across all nodes. 

So bringing NM memory to 4GB is fine. [~bibinchundatt], any comments on this?

> TestCapacityScheduler.testAMLimitUsage fails intermittently
> ---
>
> Key: YARN-5994
> URL: https://issues.apache.org/jira/browse/YARN-5994
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler-output.txt,
>  YARN-5994.001.patch
>
>
> {noformat}
> java.lang.AssertionError: app shouldn't be null
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertNotNull(Assert.java:621)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:169)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:577)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:488)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.submitApp(MockRM.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.verifyAMLimitForLeafQueue(TestCapacityScheduler.java:3389)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler.testAMLimitUsage(TestCapacityScheduler.java:3251)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6145) Improve log message on fail over

2017-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857859#comment-15857859
 ] 

Hudson commented on YARN-6145:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11222 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11222/])
YARN-6145. Improve log message on fail over. Contributed by Jian He. 
(junping_du: rev eec52e158b7bc14b2d3d53512323ba05e15e09e3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RequestHedgingRMFailoverProxyProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java


> Improve log message on fail over
> 
>
> Key: YARN-6145
> URL: https://issues.apache.org/jira/browse/YARN-6145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6145.1.patch
>
>
> On failover, a series of exception stack shown in the log, which is harmless, 
> but confusing to user.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6145) Improve log message on fail over

2017-02-08 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6145:
-
Fix Version/s: 3.0.0-alpha3

> Improve log message on fail over
> 
>
> Key: YARN-6145
> URL: https://issues.apache.org/jira/browse/YARN-6145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: YARN-6145.1.patch
>
>
> On failover, a series of exception stack shown in the log, which is harmless, 
> but confusing to user.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6027) Improve /flows API for more flexible filters fromid, collapse, userid

2017-02-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857803#comment-15857803
 ] 

Varun Saxena commented on YARN-6027:


bq.  But it is expected to collapse with date range
OK, then it's fine. I was thinking we would try to display all the flows (say 
10 per page) in the UI. If it is based on a date range, then it should be fine 
in terms of performance.
I guess we will probably display flows for the current day only.
We can probably leave a note in the javadoc that a suitable date range should 
generally be provided for this REST endpoint.

bq.  User can directly provide flow entity ID as fromId.
Oh, you are providing the ID itself. Maybe we should leave a note in the 
javadoc and documentation that the cluster part of it will be ignored, and that 
in case of collapse, the cluster and timestamp will be ignored. In the UI case, 
the cluster would be the same as the one in the REST endpoint, but you can also 
form fromId manually and provide a different cluster ID than the one in the 
REST URL path param. So we should make the behavior clear.

bq. If need to parse the errors, then why flow entity id is providing full row 
key as id? I think need to change flow entity id format itself.
That is just for reading; we do not make any decisions with it, but now we 
will. We can encode or escape the cluster and other parts while creating the ID 
in FlowActivityEntity itself, but when the UI displays it, it may have to 
unescape it. We would also need to unescape it after splitting fromId. Changing 
the format won't make much difference, as some delimiter or other will have to 
be used and it will have to be escaped too, right? The cluster ID is a plain 
string and we have to assume it can be anything. This would have to be done 
just to make the system more robust, even if we are unlikely to see a given 
delimiter in the cluster name or elsewhere.
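Just to illustrate the escaping concern (a generic sketch only, not the actual 
ATSv2 reader code; the delimiter '!' and escape character '*' are arbitrary 
choices for the example):
{code}
import java.util.ArrayList;
import java.util.List;

// Escape '!' (delimiter) and '*' (escape char) inside a component such as the
// cluster id before joining components into a fromId string.
static String escape(String s) {
  StringBuilder b = new StringBuilder();
  for (char c : s.toCharArray()) {
    if (c == '*' || c == '!') {
      b.append('*');
    }
    b.append(c);
  }
  return b.toString();
}

// Split a fromId back into components with a single left-to-right scan,
// unescaping as we go, so a '!' inside a cluster name cannot break parsing.
static List<String> split(String joined) {
  List<String> parts = new ArrayList<>();
  StringBuilder cur = new StringBuilder();
  for (int i = 0; i < joined.length(); i++) {
    char c = joined.charAt(i);
    if (c == '*' && i + 1 < joined.length()) {
      cur.append(joined.charAt(++i)); // keep the escaped character literally
    } else if (c == '!') {
      parts.add(cur.toString());
      cur.setLength(0);
    } else {
      cur.append(c);
    }
  }
  parts.add(cur.toString());
  return parts;
}
{code}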

bq. One optimization I can do is PageFilter can be applied in non-collapse mode
Yeah that can be done.

bq. If you look at the patch, I have removed PageFilter while scanning which 
gives all the data. 
OK. Can't we apply PageFilter in steps in collapse mode? Maybe override 
getResults itself. When we use it with a date range it should be fine, but in 
cases where a date range is not specified, this may help. What I mean is: get 
results from the backend with a PageFilter equivalent to the limit, then 
collapse, and go back and fetch results again if more records are required 
(based on the limit), something like the pseudocode below. We do need to 
verify, however, whether a PageFilter with limited but possibly multiple 
fetches is better than getting all the data. I suspect the former may be 
better, especially as the table grows, though I am not 100% sure.
{code}
int tmp=0;
while(tmp <= limit)
   get results with PageFilter= limit
   collapse records
   tmp = tmp + number of collapsed flow entities in this iteration.
end while
{code}
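To make that concrete, a hedged sketch against the plain HBase client API is 
below. {{table}}, {{limit}} and the {{collapseKey}} helper are assumed to be in 
scope, and {{collapseKey}} is a hypothetical placeholder for however a row maps 
to a collapsed flow, not the real flow activity row-key parsing:
{code}
// Uses org.apache.hadoop.hbase.client.{Scan, Result, ResultScanner},
// filter.PageFilter, util.Bytes, HConstants, and java.util.{Map, LinkedHashMap}.
// Fetch at most "limit" raw rows per step, collapse them, and resume the scan
// after the last row seen until enough collapsed flows are gathered or the
// table is exhausted.
Map<String, Result> collapsed = new LinkedHashMap<>();
byte[] startRow = HConstants.EMPTY_START_ROW;
while (collapsed.size() < limit) {
  Scan scan = new Scan();
  scan.setStartRow(startRow);
  scan.setFilter(new PageFilter(limit));
  int fetched = 0;
  byte[] lastRow = null;
  try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result r : scanner) {
      fetched++;
      lastRow = r.getRow();
      collapsed.putIfAbsent(collapseKey(r), r); // collapse duplicates across dates
      if (collapsed.size() >= limit) {
        break;
      }
    }
  }
  if (fetched < limit || lastRow == null) {
    break; // the scan returned fewer rows than requested, so it is exhausted
  }
  // Resume strictly after the last row seen in this step.
  startRow = Bytes.add(lastRow, new byte[] {0});
}
{code}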

Additionally, a few other comments:
# In the TimelineEntityFilters class javadoc, we should document collapse.
# In the javadoc for fromId you mention "The fromId values should be same as 
fromId info field in flow entities. It defines flow entity id.". We do not have 
a fromId field in flow entities. I guess you mean id.
# In TimelineReaderWebServices#getFlows, a NumberFormatException can be thrown 
for fromId as well; in handleException we should pass the correct message for 
this.

> Improve /flows API for more flexible filters fromid, collapse, userid
> -
>
> Key: YARN-6027
> URL: https://issues.apache.org/jira/browse/YARN-6027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6027-YARN-5355.0001.patch
>
>
> In YARN-5585 , fromId is supported for retrieving entities. We need similar 
> filter for flows/flowRun apps and flow run and flow as well. 
> Along with supporting fromId, this JIRA should also discuss following points
> * Should we throw an exception for entities/entity retrieval if duplicates 
> found?
> * TimelieEntity :
> ** Should equals method also check for idPrefix?
> ** Does idPrefix is part of identifiers?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6031) Application recovery has failed when node label feature is turned off during RM recovery

2017-02-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857781#comment-15857781
 ] 

Sunil G commented on YARN-6031:
---

Test case failures are known and not related to the patch for branch-2.8 (JDK 7).

Committing to branch-2.8.

> Application recovery has failed when node label feature is turned off during 
> RM recovery
> 
>
> Key: YARN-6031
> URL: https://issues.apache.org/jira/browse/YARN-6031
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Affects Versions: 2.8.0
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-6031.001.patch, YARN-6031.002.patch, 
> YARN-6031.003.patch, YARN-6031.004.patch, YARN-6031.005.patch, 
> YARN-6031.006.patch, YARN-6031.007.patch, YARN-6031-branch-2.8.001.patch
>
>
> Here are the repro steps:
> Enable node labels, restart the RM, configure CS properly, and run some jobs;
> Disable node labels, restart the RM, and the following exception is thrown:
> {noformat}
> Caused by: 
> org.apache.hadoop.yarn.exceptions.InvalidLabelResourceRequestException: 
> Invalid resource request, node label not enabled but request contains label 
> expression
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:225)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:248)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:394)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:339)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recoverApplication(RMAppManager.java:319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.recover(RMAppManager.java:436)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.recover(ResourceManager.java:1165)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:574)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> ... 10 more
> {noformat}
> During RM restart, application recovery failed because the application had a 
> node label expression specified while node labels were disabled.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6031) Application recovery has failed when node label feature is turned off during RM recovery

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857757#comment-15857757
 ] 

Hadoop QA commented on YARN-6031:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
 |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | YARN-6031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851556/YARN-6031-branch-2.8.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  c

[jira] [Commented] (YARN-6150) TestContainerManagerSecurity tests for Yarn Server are flakey

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857714#comment-15857714
 ] 

Hadoop QA commented on YARN-6150:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 10s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests: 
The patch generated 2 new + 0 unchanged - 38 fixed = 2 total (was 38) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 10s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6150 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12851568/YARN-6150.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d60b5f3a78e1 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2007e0c |
| Default Java | 1.8.0_121 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/14861/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/14861/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-se

[jira] [Commented] (YARN-6151) FS Preemption doesn't filter out queues which cannot be preempted

2017-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15857612#comment-15857612
 ] 

Hadoop QA commented on YARN-6151:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
36s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
59s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 20 unchanged - 1 fixed = 21 total (was 21) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}192m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
 |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
 |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.