[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517520#comment-15517520
 ] 

Hadoop QA commented on YARN-3139:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 7 new + 241 unchanged - 54 fixed = 248 total (was 295) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 33m 57s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830090/YARN-3139.3.patch |
| JIRA Issue | YARN-3139 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 877262f43ea1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6e849cb |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13200/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13200/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13200/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518191#comment-15518191
 ] 

Hadoop QA commented on YARN-3139:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 242 unchanged - 54 fixed = 248 total (was 296) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 45s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 33s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830154/YARN-3139.4.patch |
| JIRA Issue | YARN-3139 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0192fd46eb6f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6eb700e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13203/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13203/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13203/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Updated] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-09-23 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5638:

Attachment: YARN-5638-trunk.v3.patch

Uploaded a new patch to address the checkstyle issues. I did not fix the two 
"method too long" warnings since they are not introduced by this change. 
Meanwhile, the failing unit test appears to be unrelated. Let me kick Jenkins 
again with this patch and see how to proceed. 

> Introduce a collector timestamp to uniquely identify collectors creation 
> order in collector discovery
> -
>
> Key: YARN-5638
> URL: https://issues.apache.org/jira/browse/YARN-5638
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5638-trunk.v1.patch, YARN-5638-trunk.v2.patch, 
> YARN-5638-trunk.v3.patch
>
>
> As discussed in YARN-3359, we need to further identify timeline collectors' 
> creation order to rebuild collector discovery data in the RM. This JIRA 
> proposes to use a collector creation timestamp to order collectors for each 
> application in the RM. This timestamp can then be used when a standby RM 
> becomes active and needs to rebuild collector discovery data. 
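
A minimal sketch of the ordering idea described above, assuming a per-app 
registry keyed by creation timestamp (all names here are illustrative, not 
taken from the patch):

{code:java}
// Hypothetical sketch: keep, per application, only the collector whose
// creation timestamp is newest; a stale report loses the merge.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CollectorRegistry {
  static final class CollectorInfo {
    final String address;
    final long creationTimestamp; // assigned when the collector is created
    CollectorInfo(String address, long creationTimestamp) {
      this.address = address;
      this.creationTimestamp = creationTimestamp;
    }
  }

  private final Map<String, CollectorInfo> collectors = new ConcurrentHashMap<>();

  /** Accept a reported collector only if it is newer than the known one. */
  void report(String appId, CollectorInfo incoming) {
    collectors.merge(appId, incoming, (current, candidate) ->
        candidate.creationTimestamp > current.creationTimestamp
            ? candidate : current);
  }
}
{code}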



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5664) Fix Yarn documentation to link to correct versions.

2016-09-23 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517905#comment-15517905
 ] 

Yufei Gu commented on YARN-5664:


[~xiaochen], thanks for working on this. Good catch! The patch looks good to me. 
Hi [~rchiang], wanna take a look?

> Fix Yarn documentation to link to correct versions.
> ---
>
> Key: YARN-5664
> URL: https://issues.apache.org/jira/browse/YARN-5664
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: YARN-5664.01.patch
>
>
> Found out that some links in Yarn's doc are pointing to {{current}}. They 
> should point to the version they're on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3411) [Storage implementation] explore & create the native HBase schema for writes

2016-09-23 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3411:
-
Summary: [Storage implementation] explore & create the native HBase schema 
for writes  (was: [Storage implementation] explore the native HBase write 
schema for storage)

> [Storage implementation] explore & create the native HBase schema for writes
> 
>
> Key: YARN-3411
> URL: https://issues.apache.org/jira/browse/YARN-3411
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
> Fix For: 3.0.0-alpha1
>
> Attachments: ATSv2BackendHBaseSchemaproposal.pdf, 
> YARN-3411-YARN-2928.001.patch, YARN-3411-YARN-2928.002.patch, 
> YARN-3411-YARN-2928.003.patch, YARN-3411-YARN-2928.004.patch, 
> YARN-3411-YARN-2928.005.patch, YARN-3411-YARN-2928.006.patch, 
> YARN-3411-YARN-2928.007.patch, YARN-3411.poc.2.txt, YARN-3411.poc.3.txt, 
> YARN-3411.poc.4.txt, YARN-3411.poc.5.txt, YARN-3411.poc.6.txt, 
> YARN-3411.poc.7.txt, YARN-3411.poc.txt
>
>
> There is work that's in progress to implement the storage based on a Phoenix 
> schema (YARN-3134).
> In parallel, we would like to explore an implementation based on a native 
> HBase schema for the write path. Such a schema does not exclude using 
> Phoenix, especially for reads and offline queries.
> Once we have basic implementations of both options, we could evaluate them in 
> terms of performance, scalability, usability, etc. and make a call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5664) Fix Yarn documentation to link to correct versions.

2016-09-23 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518231#comment-15518231
 ] 

Naganarasimha G R commented on YARN-5664:
-

Thanks for working on it, [~xiaochen].
+1, simple fix, will commit it shortly!

> Fix Yarn documentation to link to correct versions.
> ---
>
> Key: YARN-5664
> URL: https://issues.apache.org/jira/browse/YARN-5664
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: YARN-5664.01.patch
>
>
> Found out that some links in Yarn's doc are pointing to {{current}}. They 
> should point to the version they're on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5664) Fix Yarn documentation to link to correct versions.

2016-09-23 Thread Xiao Chen (JIRA)
Xiao Chen created YARN-5664:
---

 Summary: Fix Yarn documentation to link to correct versions.
 Key: YARN-5664
 URL: https://issues.apache.org/jira/browse/YARN-5664
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiao Chen


Found out that some links in Yarn's doc are pointing to {{current}}. They should 
point to the version they're on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-23 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5662:
--
Attachment: YARN-5662.3.patch

[~varun_saxena], thank you a lot for the comments!
Addressed them. 

> Provide an option to enable ContainerMonitor 
> -
>
> Key: YARN-5662
> URL: https://issues.apache.org/jira/browse/YARN-5662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5662.1.patch, YARN-5662.2.patch, YARN-5662.3.patch
>
>
> Currently, if the vmem/pmem check is not enabled, ContainerMonitor does not 
> run. In certain cases, ContainerMonitor also needs to run to monitor things 
> like container-metrics. 
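
If the patch lands as described, enabling the monitor independently of the 
vmem/pmem checks would presumably be a single switch in yarn-site.xml; the 
property name below is an assumption for illustration, not confirmed from the 
patch:

{code:xml}
<!-- Hypothetical toggle; the exact property name is an assumption. -->
<property>
  <name>yarn.nodemanager.container-monitor.enabled</name>
  <value>true</value>
</property>
{code}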



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518181#comment-15518181
 ] 

Arun Suresh commented on YARN-5609:
---

So, the container.setIsReInitializing(false) statements were remnants of when 
we did not have the REINITIALIZING state. We don't really need them anymore.

bq.  if we add both metrics.reInitingContainer and 
metrics.endReInitingContainer into the setIsReInitializing method on top of 
original, that may work..
The setReinitializing(true) was meant to deal with the race condition between 
the ContainerManager API and the Container. It technically does not signify the 
start of a reinitialization; that is actually triggered either in the 
{{ReInitializationTransition}} or in the {{RetryFailureTransition}} before 
rollback.
That said, I'm not very particular about it. I can update the patch to make the 
metric coincide with the setReInit call from the API.

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch, 
> YARN-5609.009.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518181#comment-15518181
 ] 

Arun Suresh edited comment on YARN-5609 at 9/24/16 2:34 AM:


So, the container.setIsReInitializing(false) statements were remnants of when 
we did not have the REINITIALIZING state. We don't really need them anymore.

bq.  if we add both metrics.reInitingContainer and 
metrics.endReInitingContainer into the setIsReInitializing method on top of 
original, that may work..
The setReinitializing(true) was meant to deal with the race condition between 
the ContainerManager API and the Container. It technically does not signify the 
start of a reinitialization; that is actually triggered either in the 
{{ReInitializationTransition}} or in the {{RetryFailureTransition}} before 
rollback.
That said, I'm not very particular about it. I can update the patch to make the 
metric coincide with the setReInit call from the API, but to be honest, I 
prefer it the way it is in the last patch.


was (Author: asuresh):
So, the container.setIsReInitializing(false) statements were remnants of when 
we did not have the REINITIALIZING state. We don't really need them anymore.

bq.  if we add both metrics.reInitingContainer and 
metrics.endReInitingContainer into the setIsReInitializing method on top of 
original, that may work..
The setReinitializing(true) was meant to deal with the race condition between 
the ContainerManager API and the Container. It technically does not signify the 
start of a reinitialization; that is actually triggered either in the 
{{ReInitializationTransition}} or in the {{RetryFailureTransition}} before 
rollback.
That said, I'm not very particular about it. I can update the patch to make the 
metric coincide with the setReInit call from the API.

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch, 
> YARN-5609.009.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-23 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517555#comment-15517555
 ] 

Arun Suresh commented on YARN-3139:
---

Thanks for addressing the comments, [~leftnoteasy].
W.r.t. the checkstyle warnings, the unused imports can be fixed, and maybe we 
can use the changes in HADOOP-13411 to suppress the rest of the warnings.
+1 pending the above.

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch, 
> YARN-3139.3.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using a read/write lock. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.
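
For context, a minimal sketch of the read/write-lock pattern the ticket 
proposes, with illustrative method names rather than the patch's actual code:

{code:java}
// Read-mostly scheduler calls take the shared read lock; mutations take the
// exclusive write lock, so queries no longer serialize behind each other.
import java.util.concurrent.locks.ReentrantReadWriteLock;

abstract class SchedulerLockingSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  public int getNumClusterNodes() {          // illustrative read-side call
    lock.readLock().lock();
    try {
      return doGetNumClusterNodes();
    } finally {
      lock.readLock().unlock();
    }
  }

  public void addNode(Object node) {         // illustrative write-side call
    lock.writeLock().lock();
    try {
      doAddNode(node);
    } finally {
      lock.writeLock().unlock();
    }
  }

  protected abstract int doGetNumClusterNodes();
  protected abstract void doAddNode(Object node);
}
{code}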



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5525) Make log aggregation service class configurable

2016-09-23 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517594#comment-15517594
 ] 

Botong Huang edited comment on YARN-5525 at 9/23/16 9:16 PM:
-

Thanks [~subru] for the review and comments. Let me provide some context on why 
we are doing this. By changing {{AppLogAggregator}} we can only customize the 
per-app log aggregation behavior. One reason that we also want to customize 
{{LogAggregationService}} is to allow a different root log directory to upload 
logs to per app, whereas in the current implementation, logs for all apps are 
uploaded to the same (configurable) location in 
{{LogAggregationService.remoteRootLogDir}}. In general, if we want to make log 
aggregation pluggable, I think {{LogAggregationService}} is the right place 
because it is the entry service class. 

Besides, I have a question regarding all the {{private}} to {{protected}} 
changes, where I am not sure what the right thing to do is. In general, making 
the log aggregation service class configurable will suffice, and people can 
plug in any other implementation they need, without needing to modify 
{{LogAggregationService}} and {{AppLogAggregator}} at all. However, in our 
case, we only need to customize some features of the current implementation. 
Rather than copying all the code and modifying on top, I implemented mine by 
extending the current classes, so that lots of nice features are inherited. The 
code is much smaller and easier to maintain, by simply overriding the methods 
that we need to customize. This is where the {{private}} to {{protected}} 
changes are needed so that the member variables and methods are visible to the 
subclasses. 


was (Author: botong):
Thanks [~subru] for the review and comments. Let me provide some context on why 
we are doing this. By changing {{AppLogAggregator}} we can only customize the 
per-app log aggregation behavior. One reason that we also want to customize 
{{LogAggregationService}} is to allow a different root log directory to upload 
logs to per app, whereas in the current implementation, logs for every app are 
uploaded to the same (configurable) location in 
{{LogAggregationService.remoteRootLogDir}}. In general, if we want to make log 
aggregation pluggable, I think {{LogAggregationService}} is the right place 
because it is the entry service class. 

Besides, I have a question regarding all the {{private}} to {{protected}} 
changes, where I am not sure what the right thing to do is. In general, making 
the log aggregation service class configurable will suffice, and people can 
plug in any other implementation they need, without needing to modify 
{{LogAggregationService}} and {{AppLogAggregator}} at all. However, in our 
case, we only need to customize some features of the current implementation. 
Rather than copying all the code and modifying on top, I implemented mine by 
extending the current classes, so that lots of nice features are inherited. The 
code is much smaller and easier to maintain, by simply overriding the methods 
that we need to customize. This is where the {{private}} to {{protected}} 
changes are needed so that the member variables and methods are visible to the 
subclasses. 

> Make log aggregation service class configurable
> ---
>
> Key: YARN-5525
> URL: https://issues.apache.org/jira/browse/YARN-5525
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Giovanni Matteo Fumarola
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-5525.v1.patch, YARN-5525.v2.patch, 
> YARN-5525.v3.patch
>
>
> Make the log aggregation class configurable and extensible, so that 
> alternative log aggregation behaviors like app specific log aggregation 
> directory, log aggregation format can be implemented and plugged in.
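
A sketch of the pluggable-service pattern this would enable, using Hadoop's 
standard {{Configuration.getClass}} / {{ReflectionUtils}} idiom; the config key 
below is a hypothetical placeholder, not the patch's actual name:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

class LogAggregationServiceFactory {
  // Hypothetical key, for illustration only.
  static final String SERVICE_CLASS_KEY =
      "yarn.nodemanager.log-aggregation.service.class";

  /** Instantiate the configured implementation, falling back to the default. */
  static <T> T newService(Configuration conf, Class<T> base,
      Class<? extends T> defaultImpl) {
    Class<? extends T> clazz =
        conf.getClass(SERVICE_CLASS_KEY, defaultImpl, base);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}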



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518166#comment-15518166
 ] 

Jian He commented on YARN-5609:
---

I saw that the latest patch removed a couple of occurrences of 
container.setIsReInitializing(false)... Shouldn't it be the other way around, 
that endReInitingContainer needs to be added where 
container.setIsReInitializing(false) was called? Otherwise, the 
reInitingContainer counter is not reset back to zero if the container fails to 
reInit or happens to succeed before the reInit.

Basically, if we add both metrics.reInitingContainer and 
metrics.endReInitingContainer into the setIsReInitializing method on top of the 
original, that may work...

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch, 
> YARN-5609.009.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5664) Fix Yarn documentation to link to correct versions.

2016-09-23 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated YARN-5664:

Priority: Minor  (was: Major)

> Fix Yarn documentation to link to correct versions.
> ---
>
> Key: YARN-5664
> URL: https://issues.apache.org/jira/browse/YARN-5664
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: YARN-5664.01.patch
>
>
> Found out that some links in Yarn's doc are pointing to {{current}}. They 
> should point to the version they're on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5664) Fix Yarn documentation to link to correct versions.

2016-09-23 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated YARN-5664:

Attachment: YARN-5664.01.patch

Also fixed some minor typos:
CapacityScheduler.md: near peak -> near-peak
YARN.md: based the resource requirements -> based on the resource requirements

The other changes are link fixes. Compiled locally and verified that the links 
work.

> Fix Yarn documentation to link to correct versions.
> ---
>
> Key: YARN-5664
> URL: https://issues.apache.org/jira/browse/YARN-5664
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: YARN-5664.01.patch
>
>
> Found out that some links in Yarn's doc are pointing to {{current}}. They 
> should point to the version they're on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-23 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517864#comment-15517864
 ] 

Sangjin Lee commented on YARN-5585:
---

To be clear, in the case of things like the Tez vertex id, it should be easy to 
form a number that is in the right order, using things like its sequence id, 
instead of the created time. The issue was that the whole id was being treated 
as a string, which may not reflect the right numeric order.

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> the REST call gives the first/last 100 entities. How to retrieve the next set 
> of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: applications are stored in the database as app-1, app-2, ..., app-10.
> *getApps?limit=5* gives app-1 to app-5. But there is no way to retrieve the 
> next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS is targeting storage of a large number of entities, it is a very 
> common use case to get the next set of entities using fromId rather than 
> querying all the entities. This is very useful for pagination in the web UI.
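
A sketch of how a client would page with the proposed filter; the {{getApps}} 
interface below stands in for the REST request and is purely illustrative:

{code:java}
import java.util.List;

interface TimelineReaderClientSketch {
  // fromId == null means "start from the first entity"
  List<String> getApps(int limit, String fromId);
}

class PaginationDemo {
  static void readAllApps(TimelineReaderClientSketch client, int pageSize) {
    String fromId = null;
    List<String> page;
    do {
      page = client.getApps(pageSize, fromId);
      page.forEach(System.out::println);
      if (!page.isEmpty()) {
        // per the proposal, fromId=app-5 returns app-6 onward,
        // so the next request starts after the last id seen
        fromId = page.get(page.size() - 1);
      }
    } while (page.size() == pageSize);
  }
}
{code}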



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-23 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3139:
-
Attachment: YARN-3139.4.patch

Thanks [~asuresh], updated the ver.4 patch; it only removes the unused 
imports. 

There are too many similar checkstyle warnings in the scheduler, so I would 
prefer to keep them as-is. Adding suppress-warning statements may not be good 
for readability; they're all trivial warnings to me.

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch, 
> YARN-3139.3.patch, YARN-3139.4.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using a read/write lock. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5664) Fix Yarn documentation to link to correct versions.

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518069#comment-15518069
 ] 

Hadoop QA commented on YARN-5664:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 30s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830126/YARN-5664.01.patch |
| JIRA Issue | YARN-5664 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux a7cc06a4ddf6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6eb700e |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13202/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix Yarn documentation to link to correct versions.
> ---
>
> Key: YARN-5664
> URL: https://issues.apache.org/jira/browse/YARN-5664
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: YARN-5664.01.patch
>
>
> Found out that some links in Yarn's doc are pointing to {{current}}. They 
> should point to the version they're on.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5665) Documentation does not mention package name requirement for yarn.resourcemanager.scheduler.class

2016-09-23 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-5665:


 Summary: Documentation does not mention package name requirement 
for yarn.resourcemanager.scheduler.class
 Key: YARN-5665
 URL: https://issues.apache.org/jira/browse/YARN-5665
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0-alpha1
Reporter: Miklos Szegedi
Priority: Trivial


http://hadoop.apache.org/docs/r3.0.0-alpha1/hadoop-project-dist/hadoop-common/ClusterSetup.html
 refers to FairScheduler when it documents the setting 
yarn.resourcemanager.scheduler.class. What it forgets to mention is that the 
user has to specify the fully qualified class name, like 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler; 
otherwise the system throws java.lang.ClassNotFoundException: FairScheduler. It 
would be nice if the documentation specified the fully qualified class name, so 
that the user does not need to look it up.
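
Concretely, the working form of the setting from the description is:

{code:xml}
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
{code}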



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5525) Make log aggregation service class configurable

2016-09-23 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517594#comment-15517594
 ] 

Botong Huang commented on YARN-5525:


Thanks [~subru] for the review and comments. Let me provide some context on why 
we are doing this. By changing {{AppLogAggregator}} we can only customize the 
per-app log aggregation behavior. One reason that we also want to customize 
{{LogAggregationService}} is to allow a different root log directory to upload 
logs to per app, whereas in the current implementation, logs for every app are 
uploaded to the same (configurable) location in 
{{LogAggregationService.remoteRootLogDir}}. In general, if we want to make log 
aggregation pluggable, I think {{LogAggregationService}} is the right place 
because it is the entry service class. 

Besides, I have a question regarding all the {{private}} to {{protected}} 
changes, where I am not sure what the right thing to do is. In general, making 
the log aggregation service class configurable will suffice, and people can 
plug in any other implementation they need, without needing to modify 
{{LogAggregationService}} and {{AppLogAggregator}} at all. However, in our 
case, we only need to customize some features of the current implementation. 
Rather than copying all the code and modifying on top, I implemented mine by 
extending the current classes, so that lots of nice features are inherited. The 
code is much smaller and easier to maintain, by simply overriding the methods 
that we need to customize. This is where the {{private}} to {{protected}} 
changes are needed so that the member variables and methods are visible to the 
subclasses. 

> Make log aggregation service class configurable
> ---
>
> Key: YARN-5525
> URL: https://issues.apache.org/jira/browse/YARN-5525
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Giovanni Matteo Fumarola
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-5525.v1.patch, YARN-5525.v2.patch, 
> YARN-5525.v3.patch
>
>
> Make the log aggregation class configurable and extensible, so that 
> alternative log aggregation behaviors like app specific log aggregation 
> directory, log aggregation format can be implemented and plugged in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-09-23 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517898#comment-15517898
 ] 

Sangjin Lee commented on YARN-5561:
---

Sorry it took me a while to get to this.

I'm generally fine with the newly proposed REST endpoints (as they seem to be 
more like shorthand APIs for the existing ones).

Regarding the need for listing all apps (in a cluster), I agree with others 
that it would be quite problematic. If I read this thread correctly, Rohith, I 
think you're saying you are OK without having that API since we have a query 
that returns all apps for a flow (and one that returns all flow activities).

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-5561.02.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also required to know about all the entities in an 
> application.
> These URLs are pretty much required for the web UI.
> The new REST URLs would be: 
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5638) Introduce a collector timestamp to uniquely identify collectors creation order in collector discovery

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518101#comment-15518101
 ] 

Hadoop QA commented on YARN-5638:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 2 new + 386 unchanged - 10 fixed = 388 total (was 396) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 31s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 30s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830128/YARN-5638-trunk.v3.patch
 |
| JIRA Issue | YARN-5638 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 5d2fdfcf5243 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6eb700e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5659) getPathFromYarnURL should use standard methods

2016-09-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517984#comment-15517984
 ] 

Sergey Shelukhin commented on YARN-5659:


[~templedf], does the patch make sense now? Only the whitespace was changed 
since the last iteration. 
[~hitesh], FYI this one is ready.

> getPathFromYarnURL should use standard methods
> --
>
> Key: YARN-5659
> URL: https://issues.apache.org/jira/browse/YARN-5659
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: YARN-5659.01.patch, YARN-5659.02.patch, 
> YARN-5659.03.patch, YARN-5659.04.patch, YARN-5659.04.patch, YARN-5659.patch
>
>
> getPathFromYarnURL does some string shenanigans where standard ctors should 
> suffice.
> There are also bugs in it, e.g. passing an empty scheme to the URI ctor is 
> invalid; null should be used. 
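
A small demo of the scheme point, using only the standard {{java.net.URI}} 
multi-argument constructor:

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

class UriSchemeDemo {
  public static void main(String[] args) throws URISyntaxException {
    // null scheme: a valid, scheme-less (relative) URI
    URI ok = new URI(null, null, "/tmp/some/path", null);
    System.out.println(ok); // prints /tmp/some/path

    // empty-string scheme: rejected by the parser
    try {
      new URI("", null, "/tmp/some/path", null);
    } catch (URISyntaxException e) {
      System.out.println("empty scheme rejected: " + e.getMessage());
    }
  }
}
{code}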



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518321#comment-15518321
 ] 

Hadoop QA commented on YARN-5662:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 0s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 251 unchanged - 5 fixed = 251 total (was 256) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 50s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 33s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830162/YARN-5662.3.patch |
| JIRA Issue | YARN-5662 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 6df18877beda 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6eb700e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518350#comment-15518350
 ] 

Jian He commented on YARN-5609:
---

I see, I'm OK with not using the same method. The main thing I'm referring to 
is that the containersReIniting counter may not be decremented if the container 
fails to reInit or happens to succeed before the reInit, right? It only 
decrements when the container successfully reInits. This will cause 
containersReIniting to stay at a positive value forever, right?

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch, 
> YARN-5609.009.patch
>
>
> YARN-5620 and YARN-5637 allows an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container as well. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518409#comment-15518409
 ] 

Arun Suresh commented on YARN-5609:
---

Hmm... so how about I put back the old {{setIsReinitializing(false)}}, but then 
decrement the metric only if the state has changed from true to false?
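Something along these lines, as a rough sketch (the containerMetrics field and 
the endReInitingContainer call are assumed names from this discussion, not the 
actual NodeManager code):
{code}
// Sketch only: release the containersReIniting metric exactly once, on
// the true -> false transition of the reinitializing flag, regardless of
// whether the reInit succeeded or failed.
private boolean isReInitializing = false;

public void setIsReInitializing(boolean reInitializing) {
  if (this.isReInitializing && !reInitializing) {
    containerMetrics.endReInitingContainer();
  }
  this.isReInitializing = reInitializing;
}
{code}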

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch, 
> YARN-5609.009.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518418#comment-15518418
 ] 

Hadoop QA commented on YARN-5609:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} root: The patch generated 0 new + 491 unchanged - 
17 fixed = 491 total (was 508) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 44s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 18s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 45s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 114m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830018/YARN-5609.009.patch |
| JIRA Issue | YARN-5609 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux b14f3448bff1 3.13.0-36-lowlatency 

[jira] [Commented] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518419#comment-15518419
 ] 

Hadoop QA commented on YARN-5662:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 56s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 251 unchanged - 5 fixed = 251 total (was 256) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 21s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 55s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12830162/YARN-5662.3.patch |
| JIRA Issue | YARN-5662 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 473fe2149e04 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6eb700e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13206/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 

[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-09-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518416#comment-15518416
 ] 

Wangda Tan commented on YARN-5145:
--

Thanks [~cheersyang] for the REST API enhancement; it will be very helpful for 
retrieving configuration values!

[~Sreenath], thanks for your suggestions; I'm still trying to understand how it 
works. For now, I think the most important use case for the new UI is:
bq. 1. Production - UI hosted in RM
I understand that the UI can get configs from the REST API. But how does the UI 
know the address of the RM REST endpoint? If the user updates the default web 
port from 8088 to another port, or in a multi-homing environment where the RM 
is hosted on a different hostname, how does the UI talk to the RM?
Is there any approach in JS/Ember that lets us pass environment variables while 
hosting the UI code? 

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: YARN-5145-YARN-3368.01.patch
>
>
> Existing YARN UI configuration is under the Hadoop package's directory 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/; we should move it to 
> $HADOOP_CONF_DIR like other configurations.






[jira] [Commented] (YARN-5664) Fix Yarn documentation to link to correct versions.

2016-09-23 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518466#comment-15518466
 ] 

Xiao Chen commented on YARN-5664:
-

Thanks [~yufeigu] and [~Naganarasimha] for reviews and (coming) commit!

> Fix Yarn documentation to link to correct versions.
> ---
>
> Key: YARN-5664
> URL: https://issues.apache.org/jira/browse/YARN-5664
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: YARN-5664.01.patch
>
>
> Found out that some links in Yarn's doc are pointing to {{current}}. They 
> should point to the version they're on.






[jira] [Commented] (YARN-5539) TimelineClient failed to retry on "java.net.SocketTimeoutException: Read timed out"

2016-09-23 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515549#comment-15515549
 ] 

Varun Saxena commented on YARN-5539:


+1
Will commit it shortly.

> TimelineClient failed to retry on "java.net.SocketTimeoutException: Read 
> timed out"
> ---
>
> Key: YARN-5539
> URL: https://issues.apache.org/jira/browse/YARN-5539
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Attachments: YARN-5539.patch
>
>
> AM fails with the following exception
> {code}
> FATAL distributedshell.ApplicationMaster: Error running ApplicationMaster
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:236)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:185)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:247)
>   at com.sun.jersey.api.client.Client.handle(Client.java:648)
>   at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
>   at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>   at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPostingObject(TimelineWriter.java:154)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:115)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:112)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPosting(TimelineWriter.java:112)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.putEntities(TimelineWriter.java:92)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:345)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishApplicationAttemptEvent(ApplicationMaster.java:1166)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.run(ApplicationMaster.java:567)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.main(ApplicationMaster.java:298)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:170)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:253)
>   at 
> org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineURLConnectionFactory.getHttpURLConnection(TimelineClientImpl.java:472)
>   at 
> 

[jira] [Commented] (YARN-5610) Initial code for native services REST API

2016-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515597#comment-15515597
 ] 

Jian He commented on YARN-5610:
---

bq. The reason these attributes are at the Application level as well is because 
there will be simple applications which will not have any components.
Ok, sounds good to me.

Any update on the 2nd set of comments?

> Initial code for native services REST API
> -
>
> Key: YARN-5610
> URL: https://issues.apache.org/jira/browse/YARN-5610
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
> Attachments: YARN-4793-yarn-native-services.001.patch, 
> YARN-5610-yarn-native-services.002.patch
>
>
> This task will be used to submit and review patches for the initial code drop 
> for the native services REST API 






[jira] [Updated] (YARN-5539) TimelineClient failed to retry on "java.net.SocketTimeoutException: Read timed out"

2016-09-23 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5539:
---
Hadoop Flags: Reviewed

> TimelineClient failed to retry on "java.net.SocketTimeoutException: Read 
> timed out"
> ---
>
> Key: YARN-5539
> URL: https://issues.apache.org/jira/browse/YARN-5539
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5539.patch
>
>
> AM fails with the following exception
> {code}
> FATAL distributedshell.ApplicationMaster: Error running ApplicationMaster
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:236)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:185)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:247)
>   at com.sun.jersey.api.client.Client.handle(Client.java:648)
>   at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
>   at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>   at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPostingObject(TimelineWriter.java:154)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:115)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:112)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPosting(TimelineWriter.java:112)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.putEntities(TimelineWriter.java:92)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:345)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishApplicationAttemptEvent(ApplicationMaster.java:1166)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.run(ApplicationMaster.java:567)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.main(ApplicationMaster.java:298)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:170)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:253)
>   at 
> org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineURLConnectionFactory.getHttpURLConnection(TimelineClientImpl.java:472)
>   at 
> 

[jira] [Updated] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-23 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3142:
---
Attachment: YARN-3142.03.patch

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch, YARN-3142.02.patch, 
> YARN-3142.03.patch
>
>







[jira] [Commented] (YARN-5539) TimelineClient failed to retry on "java.net.SocketTimeoutException: Read timed out"

2016-09-23 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515751#comment-15515751
 ] 

Varun Saxena commented on YARN-5539:


Committed to trunk, branch-2, and branch-2.8.
Thanks [~djp] for your contribution.

> TimelineClient failed to retry on "java.net.SocketTimeoutException: Read 
> timed out"
> ---
>
> Key: YARN-5539
> URL: https://issues.apache.org/jira/browse/YARN-5539
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5539.patch
>
>
> AM fails with the following exception
> {code}
> FATAL distributedshell.ApplicationMaster: Error running ApplicationMaster
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:236)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:185)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:247)
>   at com.sun.jersey.api.client.Client.handle(Client.java:648)
>   at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
>   at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>   at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPostingObject(TimelineWriter.java:154)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:115)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:112)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPosting(TimelineWriter.java:112)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.putEntities(TimelineWriter.java:92)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:345)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishApplicationAttemptEvent(ApplicationMaster.java:1166)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.run(ApplicationMaster.java:567)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.main(ApplicationMaster.java:298)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:170)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:253)
>   at 
> org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
>   at 
> 

[jira] [Commented] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515983#comment-15515983
 ] 

Hadoop QA commented on YARN-5662:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch 
generated 0 new + 251 unchanged - 5 fixed = 251 total (was 256) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 16s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 55s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 0s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829992/YARN-5662.2.patch |
| JIRA Issue | YARN-5662 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 805c396dd9ec 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b8a2d7b |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (YARN-5622) TestYarnCLI.testGetContainers fails due to mismatched date formats

2016-09-23 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516022#comment-15516022
 ] 

Akira Ajisaka commented on YARN-5622:
-

I couldn't reproduce the failure. Does this issue still occur on the latest 
trunk?

> TestYarnCLI.testGetContainers fails due to mismatched date formats
> --
>
> Key: YARN-5622
> URL: https://issues.apache.org/jira/browse/YARN-5622
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5622.001.patch
>
>
> ApplicationCLI.listContainers uses Times.format to print timestamps, while 
> TestYarnCLI.testGetContainers formats them using dateFormat.format with its 
> own defined format. The test should be consistent and use Times.format. 
> {noformat}
> org.junit.ComparisonFailure: expected:<...1234_0005_01_01 [Thu Jan 01 
> 00:00:01 + 1970 Thu Jan 01 00:00:05 + 1970  COMPLETE  
>  host:1234http://host:2345 
> logURL
>  container_1234_0005_01_02Thu Jan 01 00:00:01 + 1970  Thu Jan 
> 01 00:00:05 + 1970  COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_03Thu Jan 01 00:00:01 + 1970] 
>  N/...> but was:<...1234_0005_01_01 [ 1-Jan-1970 00:00:01
> 1-Jan-1970 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_02 1-Jan-1970 00:00:01 1-Jan-1970 
> 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_03 1-Jan-1970 00:00:01]   
>  N/...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.client.cli.TestYarnCLI.testGetContainers(TestYarnCLI.java:330)
> {noformat}
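A minimal sketch of the consistent approach described above (the timestamp 
value, the sysOutStream name, and the assertion shape are hypothetical, not the 
actual patch; Times.format is the YARN utility the CLI already uses):
{code}
// Build the expected output with the same Times.format the CLI uses, so
// the test can no longer drift from the production date format.
// sysOutStream stands in for the captured CLI output in the test.
long startTime = 1000L;  // hypothetical value
String expected = org.apache.hadoop.yarn.util.Times.format(startTime);
assertTrue(sysOutStream.toString().contains(expected));
{code}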






[jira] [Assigned] (YARN-5269) Bubble exceptions and errors all the way up the calls, including to clients.

2016-09-23 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned YARN-5269:
--

Assignee: Varun Saxena

> Bubble exceptions and errors all the way up the calls, including to clients.
> 
>
> Key: YARN-5269
> URL: https://issues.apache.org/jira/browse/YARN-5269
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
>  Labels: YARN-5355
>
> Currently we ignore (swallow) exceptions from the HBase side in many cases 
> (reads and writes).
> Also, on the client side, neither TimelineClient#putEntities (the v2 flavor) 
> nor the #putEntitiesAsync method return any value.
> For the second drop we may want to consider how we properly bubble up 
> exceptions throughout the write and reader call paths and if we want to 
> return a response in putEntities and some future kind of result for 
> putEntitiesAsync.






[jira] [Updated] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5609:
--
Attachment: YARN-5609.009.patch

Uploading patch

bq. I think wherever setIsReInitializing(false) is called endReInitingContainer 
should be called. Otherwise it's possible endReInitingContainer is not invoked.
Agreed... fixed this in v009

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch, 
> YARN-5609.009.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Commented] (YARN-5539) TimelineClient failed to retry on "java.net.SocketTimeoutException: Read timed out"

2016-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515944#comment-15515944
 ] 

Hudson commented on YARN-5539:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10478 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10478/])
YARN-5539. TimelineClient failed to retry on (varunsaxena: rev 
b8a2d7b8fc96302ba1ef99d24392f463734f1b82)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java


> TimelineClient failed to retry on "java.net.SocketTimeoutException: Read 
> timed out"
> ---
>
> Key: YARN-5539
> URL: https://issues.apache.org/jira/browse/YARN-5539
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Junping Du
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5539.patch
>
>
> AM fails with the following exception
> {code}
> FATAL distributedshell.ApplicationMaster: Error running ApplicationMaster
> com.sun.jersey.api.client.ClientHandlerException: 
> java.net.SocketTimeoutException: Read timed out
>   at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:236)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:185)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:247)
>   at com.sun.jersey.api.client.Client.handle(Client.java:648)
>   at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
>   at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
>   at 
> com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPostingObject(TimelineWriter.java:154)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:115)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:112)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPosting(TimelineWriter.java:112)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineWriter.putEntities(TimelineWriter.java:92)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:345)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishApplicationAttemptEvent(ApplicationMaster.java:1166)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.run(ApplicationMaster.java:567)
>   at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.main(ApplicationMaster.java:298)
> Caused by: java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:170)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
>   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1536)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1441)
>   at 
> java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:253)
>   at 
> org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>   at 
> 

[jira] [Commented] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-23 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515990#comment-15515990
 ] 

Varun Saxena commented on YARN-3142:


Fixed checkstyle issues

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch, YARN-3142.02.patch, 
> YARN-3142.03.patch
>
>







[jira] [Commented] (YARN-5400) Light cleanup in ZKRMStateStore

2016-09-23 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515889#comment-15515889
 ] 

Akira Ajisaka commented on YARN-5400:
-

LGTM, +1.

> Light cleanup in ZKRMStateStore
> ---
>
> Key: YARN-5400
> URL: https://issues.apache.org/jira/browse/YARN-5400
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-5400.001.patch, YARN-5400.002.patch
>
>
> {{ZKRMStateStore}} contains a plethora of whitespace issues as well as some 
> icky bits, like unused variables. This JIRA is to clean that up. It should 
> have no functional impact.






[jira] [Updated] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-09-23 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-5280:

Attachment: YARN-5280.001.patch

This latest patch includes the controls discussed in this thread and uses the 
new ContainerRuntime API.
[~rkanter], [~lmccay], [~vinodkv] - any thoughts on next steps?

> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-5280.001.patch, YARN-5280.patch, 
> YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.






[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-09-23 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516508#comment-15516508
 ] 

Jian He commented on YARN-5611:
---

- I thought the lifetime passed to the updateLifetime API was an absolute 
value; it looks like it is an increment instead? Should it be absolute, so that 
shortening the lifetime is also possible in the future?
- For the code below, appTimeouts.setLifetime will throw an NPE if appTimeouts 
is null, since the setLifetime call sits outside the null check.
{code}
ApplicationTimeouts appTimeouts =
application.getApplicationSubmissionContext().getApplicationTimeouts();
if (appTimeouts != null) {
  oldLifetime =
  appTimeouts.getLifetime() > 0 ? appTimeouts.getLifetime() : 0;
}
appTimeouts.setLifetime(oldLifetime + newLifetime);
{code}
- updateApplicationTimeout will store the updated 
applicationSubmissionContext; it is possible that this happens even before the 
original appSubmissionContext is persisted when the app is submitted. Then the 
updated timeout will be overwritten.

- Similarly, for the code below, newLifetime could be overwritten by the 
original timeout in the appSubmissionContext on submission; then we lose the 
updated timeout.
{code}
  // If application is not monitored earlier then start monitoring from now
  register(appId);
  monitoredApps.put(appId, newLifetime);
{code}
So should we allow updates only when the app is running or accepted?
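For the NPE point above, a minimal null-safe sketch of the guard (illustration 
only, not the actual patch; it reuses the getter/setter names from the snippet):
{code}
ApplicationTimeouts appTimeouts =
    application.getApplicationSubmissionContext().getApplicationTimeouts();
long oldLifetime = 0;
if (appTimeouts != null) {
  oldLifetime = appTimeouts.getLifetime() > 0 ? appTimeouts.getLifetime() : 0;
  // setLifetime is now reached only when appTimeouts is non-null, so a
  // submission context without timeouts can no longer trigger an NPE here.
  appTimeouts.setLifetime(oldLifetime + newLifetime);
}
{code}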

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5611.patch, YARN-5611.v0.patch
>
>
> With YARN-4205, the lifetime of an application is monitored if required. 
> Add a client API to update the lifetime of an application. 






[jira] [Updated] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-09-23 Thread Greg Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Phillips updated YARN-5280:

Attachment: (was: YARN-5280.005.patch)

> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Assignee: Greg Phillips
>Priority: Minor
> Attachments: YARN-5280.patch, YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.






[jira] [Commented] (YARN-5663) Possible resource leak

2016-09-23 Thread Oleksii Dymytrov (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516720#comment-15516720
 ] 

Oleksii Dymytrov commented on YARN-5663:


Hi [~templedf],
Please review my new patch _YARN_5663_v1_002_patch_.


> Possible resource leak
> --
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch, 
> YARN_5663_v1_002_patch.patch
>
>
> The ByteArrayOutputStream resource will not be freed in case of errors in 
> the write method.
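For illustration, a minimal sketch of the try-with-resources pattern such a fix 
typically uses (hypothetical helper method, not the actual patch):
{code}
// The stream is closed on every path, including when write() throws.
static byte[] copy(byte[] data) throws java.io.IOException {
  try (java.io.ByteArrayOutputStream baos =
      new java.io.ByteArrayOutputStream()) {
    baos.write(data);
    return baos.toByteArray();
  }
}
{code}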






[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516704#comment-15516704
 ] 

Arun Suresh commented on YARN-5609:
---

The test case failures are not related to the patch.

> Expose upgrade and restart API in ContainerManagementProtocol
> -
>
> Key: YARN-5609
> URL: https://issues.apache.org/jira/browse/YARN-5609
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5609.001.patch, YARN-5609.002.patch, 
> YARN-5609.003.patch, YARN-5609.004.patch, YARN-5609.005.patch, 
> YARN-5609.006.patch, YARN-5609.007.patch, YARN-5609.008.patch, 
> YARN-5609.009.patch
>
>
> YARN-5620 and YARN-5637 allow an AM to explicitly *upgrade* a container with 
> a new launch context and subsequently *rollback* / *commit* the change on the 
> Container. This can also be used to simply *restart* the Container. 
> This JIRA proposes to extend the ContainerManagementProtocol with the 
> following API:
> * *reInitializeContainer*
> * *rollbackLastUpgrade*
> * *commitLastUpgrade*
> * *restartContainer*






[jira] [Commented] (YARN-5609) Expose upgrade and restart API in ContainerManagementProtocol

2016-09-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516638#comment-15516638
 ] 

Hadoop QA commented on YARN-5609:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
33s {color} | {color:green} root: The patch generated 0 new + 491 unchanged - 
17 fixed = 491 total (was 508) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 35s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 29s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 44s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 42s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 123m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-23 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516756#comment-15516756
 ] 

Sunil G commented on YARN-2009:
---

Thank you very much [~eepayne] for sharing your thoughts.

bq. The purpose of this line of code is to set tq's unallocated resources. But 
even if tq is below its guarantee, the amount of resources that intra-queue 
preemption should consider when balancing is not the queue's guarantee, it's 
what the queue is already using. If tq is below its guarantee, inter-queue 
preemption should be handling that.
I thought along similar lines too. But there is one point that made me think it 
may make more sense to use the guaranteed capacity. If the queue is using less 
than its capacity, our current calculation may consider more resources for 
idealAssigned per app. This may yield a smaller toBePreempted value per app 
(from lower-priority apps). And that may be fine, because there are still 
resources available in the queue for other high-priority apps, so we may not 
need to preempt immediately. Does this make sense?

bq. The use case I'm referencing regarding this code is not about 2 different 
users. It's about the same user submitting jobs of different priorities. If 
user1 submits a low priority job that consumes the whole queue, user1's 
headroom will be 0. Then, when user1 submits a second app at a higher 
priority, this code will cause the second app to starve because user1 has 
already used up its allocation.
I think I understood your point. Let me make the necessary change.

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.






[jira] [Updated] (YARN-5663) Possible resource leak

2016-09-23 Thread Oleksii Dymytrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksii Dymytrov updated YARN-5663:
---
Attachment: YARN_5663_v1_002_patch.patch

> Possible resource leak
> --
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch, 
> YARN_5663_v1_002_patch.patch
>
>
> ByteArrayOutputStream resource will not be freed in case of errors in write 
> method
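For context, a minimal sketch of the try-with-resources shape such a fix
typically takes. The method and payload below are illustrative stand-ins, not
the actual patch; note that ByteArrayOutputStream.close() is itself a no-op,
which is why this reads more as a cleanup than a true leak fix.

{code}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class StateSerializerSketch {
  // Hypothetical helper: length-prefix and return the serialized payload.
  // try-with-resources closes both streams on success and on error paths.
  static byte[] serialize(byte[] payload) throws IOException {
    try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
         DataOutputStream out = new DataOutputStream(baos)) {
      out.writeInt(payload.length);
      out.write(payload);  // may throw; the streams are still closed
      out.flush();
      return baos.toByteArray();
    }
  }
}
{code}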






[jira] [Commented] (YARN-5622) TestYarnCLI.testGetContainers fails due to mismatched date formats

2016-09-23 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516793#comment-15516793
 ] 

Eric Badger commented on YARN-5622:
---

[~ajisakaa], it doesn't currently fail in trunk because the dateFormat used in 
this test is currently consistent with the format defined in Times.format(). 
But we aren't testing that the date format is one way or the other; we're just 
checking that the dates are consistent. It's unnecessary to maintain 
consistency between this test's hardcoded dateFormat and the one defined in 
Times. So we could mark this as an improvement rather than a bug fix, as the 
test is not actually failing at this moment. But the improvement is still 
valid. 

Currently in trunk:

TestYarnCLI.java
{noformat}
DateFormat dateFormat = new SimpleDateFormat("EEE MMM dd HH:mm:ss Z yyyy");
{noformat}

Times.java
{noformat}
static final ThreadLocal<SimpleDateFormat> dateFormat =
    new ThreadLocal<SimpleDateFormat>() {
      @Override protected SimpleDateFormat initialValue() {
        return new SimpleDateFormat("EEE MMM dd HH:mm:ss Z yyyy");
      }
    };
{noformat}
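A minimal sketch of the direction of the fix (the exact test code is an
assumption): derive the expected strings from Times.format rather than from a
locally constructed SimpleDateFormat, so the test tracks whatever pattern
Times defines.

{code}
import org.apache.hadoop.yarn.util.Times;

class TimesFormatSketch {
  public static void main(String[] args) {
    // Illustrative epoch millis, mirroring the 1970 timestamps above.
    long startTime = 1000L;
    long finishTime = 5000L;
    // Building expectations from Times.format keeps the test in sync with
    // the format the CLI actually uses.
    String expectedStart = Times.format(startTime);
    String expectedFinish = Times.format(finishTime);
    System.out.println(expectedStart + " .. " + expectedFinish);
  }
}
{code}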

> TestYarnCLI.testGetContainers fails due to mismatched date formats
> --
>
> Key: YARN-5622
> URL: https://issues.apache.org/jira/browse/YARN-5622
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5622.001.patch
>
>
> ApplicationCLI.listContainers uses Times.format to print timestamps, while 
> TestYarnCLI.testGetContainers formats them using dateFormat.format with its 
> own defined format. The test should be consistent and use Times.format. 
> {noformat}
> org.junit.ComparisonFailure: expected:<...1234_0005_01_000001 [Thu Jan 01 
> 00:00:01 +0000 1970 Thu Jan 01 00:00:05 +0000 1970  COMPLETE  
>  host:1234http://host:2345 
> logURL
>  container_1234_0005_01_000002Thu Jan 01 00:00:01 +0000 1970  Thu Jan 
> 01 00:00:05 +0000 1970  COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000003Thu Jan 01 00:00:01 +0000 1970] 
>  N/...> but was:<...1234_0005_01_000001 [ 1-Jan-1970 00:00:01
> 1-Jan-1970 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000002 1-Jan-1970 00:00:01 1-Jan-1970 
> 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000003 1-Jan-1970 00:00:01]   
>  N/...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.client.cli.TestYarnCLI.testGetContainers(TestYarnCLI.java:330)
> {noformat}






[jira] [Commented] (YARN-5663) Small refactor in ZKRMStateStore

2016-09-23 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516938#comment-15516938
 ] 

Daniel Templeton commented on YARN-5663:


[~ajisakaa] or [~rchiang], wanna take a look?

> Small refactor in ZKRMStateStore
> 
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Assignee: Oleksii Dymytrov
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch, 
> YARN_5663_v1_002_patch.patch
>
>
> ByteArrayOutputStream resource will not be freed in case of errors in write 
> method






[jira] [Commented] (YARN-5622) TestYarnCLI.testGetContainers fails due to mismatched date formats

2016-09-23 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516824#comment-15516824
 ] 

Akira Ajisaka commented on YARN-5622:
-

Thank you for the detailed information. +1 for the improvement.

> TestYarnCLI.testGetContainers fails due to mismatched date formats
> --
>
> Key: YARN-5622
> URL: https://issues.apache.org/jira/browse/YARN-5622
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
> Attachments: YARN-5622.001.patch
>
>
> ApplicationCLI.listContainers uses Times.format to print timestamps, while 
> TestYarnCLI.testGetContainers formats them using dateFormat.format with its 
> own defined format. The test should be consistent and use Times.format. 
> {noformat}
> org.junit.ComparisonFailure: expected:<...1234_0005_01_000001 [Thu Jan 01 
> 00:00:01 +0000 1970 Thu Jan 01 00:00:05 +0000 1970  COMPLETE  
>  host:1234http://host:2345 
> logURL
>  container_1234_0005_01_000002Thu Jan 01 00:00:01 +0000 1970  Thu Jan 
> 01 00:00:05 +0000 1970  COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000003Thu Jan 01 00:00:01 +0000 1970] 
>  N/...> but was:<...1234_0005_01_000001 [ 1-Jan-1970 00:00:01
> 1-Jan-1970 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000002 1-Jan-1970 00:00:01 1-Jan-1970 
> 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000003 1-Jan-1970 00:00:01]   
>  N/...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.client.cli.TestYarnCLI.testGetContainers(TestYarnCLI.java:330)
> {noformat}






[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-23 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516923#comment-15516923
 ] 

Eric Payne commented on YARN-2009:
--

{quote}
I thought along similar lines too. But there is one point that made me think it 
may make more sense to use the guaranteed capacity. If the queue is using less 
than its capacity, our current calculation may consider more resources for 
idealAssigned per app. This may yield a smaller toBePreempted value per app 
(from lower-priority apps). And that may be fine, because there are still 
resources available in the queue for other high-priority apps, so we may not 
need to preempt immediately. Does this make sense?
{quote}
[~sunilg], I'm sorry, but I don't understand. Can you provide a step-by-step 
use case to demonstrate your concern?


> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.






[jira] [Updated] (YARN-5622) TestYarnCLI.testGetContainers fails due to mismatched date formats

2016-09-23 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-5622:

Priority: Minor  (was: Major)
Hadoop Flags: Reviewed
  Issue Type: Improvement  (was: Bug)

> TestYarnCLI.testGetContainers fails due to mismatched date formats
> --
>
> Key: YARN-5622
> URL: https://issues.apache.org/jira/browse/YARN-5622
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Minor
> Attachments: YARN-5622.001.patch
>
>
> ApplicationCLI.listContainers uses Times.format to print timestamps, while 
> TestYarnCLI.testGetContainers formats them using dateFormat.format with its 
> own defined format. The test should be consistent and use Times.format. 
> {noformat}
> org.junit.ComparisonFailure: expected:<...1234_0005_01_000001 [Thu Jan 01 
> 00:00:01 +0000 1970 Thu Jan 01 00:00:05 +0000 1970  COMPLETE  
>  host:1234http://host:2345 
> logURL
>  container_1234_0005_01_000002Thu Jan 01 00:00:01 +0000 1970  Thu Jan 
> 01 00:00:05 +0000 1970  COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000003Thu Jan 01 00:00:01 +0000 1970] 
>  N/...> but was:<...1234_0005_01_000001 [ 1-Jan-1970 00:00:01
> 1-Jan-1970 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000002 1-Jan-1970 00:00:01 1-Jan-1970 
> 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000003 1-Jan-1970 00:00:01]   
>  N/...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.client.cli.TestYarnCLI.testGetContainers(TestYarnCLI.java:330)
> {noformat}






[jira] [Assigned] (YARN-5663) Possible resource leak

2016-09-23 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-5663:
--

Assignee: Daniel Templeton

> Possible resource leak
> --
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch, 
> YARN_5663_v1_002_patch.patch
>
>
> ByteArrayOutputStream resource will not be freed in case of errors in write 
> method






[jira] [Commented] (YARN-5663) Possible resource leak

2016-09-23 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516860#comment-15516860
 ] 

Daniel Templeton commented on YARN-5663:


Thanks for the update, [~ameks94].  +1 (non-binding)

> Possible resource leak
> --
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch, 
> YARN_5663_v1_002_patch.patch
>
>
> ByteArrayOutputStream resource will not be freed in case of errors in write 
> method






[jira] [Updated] (YARN-5663) Small refactor in ZKRMStateStore

2016-09-23 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5663:
---
Summary: Small refactor in ZKRMStateStore  (was: Possible resource leak)

> Small refactor in ZKRMStateStore
> 
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch, 
> YARN_5663_v1_002_patch.patch
>
>
> ByteArrayOutputStream resource will not be freed in case of errors in write 
> method






[jira] [Commented] (YARN-5663) Small refactor in ZKRMStateStore

2016-09-23 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15516875#comment-15516875
 ] 

Daniel Templeton commented on YARN-5663:


As soon as I get you added to the contributors list, I'll reassign the issue to 
you.

> Small refactor in ZKRMStateStore
> 
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch, 
> YARN_5663_v1_002_patch.patch
>
>
> ByteArrayOutputStream resource will not be freed in case of errors in write 
> method






[jira] [Updated] (YARN-5663) Small refactor in ZKRMStateStore

2016-09-23 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5663:
---
Assignee: Oleksii Dymytrov  (was: Daniel Templeton)

> Small refactor in ZKRMStateStore
> 
>
> Key: YARN-5663
> URL: https://issues.apache.org/jira/browse/YARN-5663
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Oleksii Dymytrov
>Assignee: Oleksii Dymytrov
>Priority: Minor
> Attachments: YARN_5663_v1_001_patch.patch, 
> YARN_5663_v1_002_patch.patch
>
>
> ByteArrayOutputStream resource will not be freed in case of errors in write 
> method






[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517015#comment-15517015
 ] 

Karthik Kambatla commented on YARN-4464:


Latest patch looks good, but for one issue: dropping 
DEFAULT_RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS is incompatible and will 
break any code that is accessing that constant. Can we deprecate it instead? 
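For illustration, deprecating rather than deleting might look like the sketch
below; the javadoc wording is an assumption, while the 10,000 default matches
the value discussed in this JIRA.

{code}
class YarnConfigurationSketch {
  /**
   * @deprecated No longer used by the RM; retained only so that existing
   * code referencing this constant keeps compiling.
   */
  @Deprecated
  public static final int DEFAULT_RM_STATE_STORE_MAX_COMPLETED_APPLICATIONS =
      10000;
}
{code}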

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch, YARN-4464.005.patch, 
> YARN-4464.006.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so that 
> property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected the RM 
> to restart immediately, but the recovery process was very slow; I waited 
> about 20 min before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is very large. We need to change it to a lower value or 
> document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].






[jira] [Commented] (YARN-5622) TestYarnCLI.testGetContainers fails due to mismatched date formats

2016-09-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517390#comment-15517390
 ] 

Hudson commented on YARN-5622:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10480 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10480/])
YARN-5622. TestYarnCLI.testGetContainers fails due to mismatched date (kihwal: 
rev 6e849cb658438c0561de485e01f3de7df47bf9ad)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java


> TestYarnCLI.testGetContainers fails due to mismatched date formats
> --
>
> Key: YARN-5622
> URL: https://issues.apache.org/jira/browse/YARN-5622
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: YARN-5622.001.patch
>
>
> ApplicationCLI.listContainers uses Times.format to print timestamps, while 
> TestYarnCLI.testGetContainers formats them using dateFormat.format with its 
> own defined format. The test should be consistent and use Times.format. 
> {noformat}
> org.junit.ComparisonFailure: expected:<...1234_0005_01_000001 [Thu Jan 01 
> 00:00:01 +0000 1970 Thu Jan 01 00:00:05 +0000 1970  COMPLETE  
>  host:1234http://host:2345 
> logURL
>  container_1234_0005_01_000002Thu Jan 01 00:00:01 +0000 1970  Thu Jan 
> 01 00:00:05 +0000 1970  COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000003Thu Jan 01 00:00:01 +0000 1970] 
>  N/...> but was:<...1234_0005_01_000001 [ 1-Jan-1970 00:00:01
> 1-Jan-1970 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000002 1-Jan-1970 00:00:01 1-Jan-1970 
> 00:00:05COMPLETE   host:1234
> http://host:2345 logURL
>  container_1234_0005_01_000003 1-Jan-1970 00:00:01]   
>  N/...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.client.cli.TestYarnCLI.testGetContainers(TestYarnCLI.java:330)
> {noformat}






[jira] [Commented] (YARN-5324) Stateless Federation router policies implementation

2016-09-23 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517402#comment-15517402
 ] 

Carlo Curino commented on YARN-5324:


Thanks [~subru] for reviewing and committing this. 

> Stateless Federation router policies implementation
> ---
>
> Key: YARN-5324
> URL: https://issues.apache.org/jira/browse/YARN-5324
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: YARN-2915
>
> Attachments: YARN-5324-YARN-2915.06.patch, 
> YARN-5324-YARN-2915.07.patch, YARN-5324-YARN-2915.08.patch, 
> YARN-5324-YARN-2915.09.patch, YARN-5324-YARN-2915.10.patch, 
> YARN-5324-YARN-2915.11.patch, YARN-5324-YARN-2915.12.patch, 
> YARN-5324-YARN-2915.13.patch, YARN-5324-YARN-2915.14.patch, 
> YARN-5324-YARN-2915.15.patch, YARN-5324-YARN-2915.16.patch, 
> YARN-5324.01.patch, YARN-5324.02.patch, YARN-5324.03.patch, 
> YARN-5324.04.patch, YARN-5324.05.patch
>
>
> These are policies at the Router that do not require maintaining state across 
> choices (e.g., weighted random).
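As a rough illustration of the stateless weighted-random idea (class and names
below are made up for this sketch; the real policy classes live on the
YARN-2915 branch):

{code}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

class WeightedRandomSketch {
  // Pick a key with probability proportional to its weight. No state is kept
  // across calls, which is what makes such a policy "stateless".
  static String pick(Map<String, Double> weights, Random rng) {
    double total = 0;
    for (double w : weights.values()) {
      total += w;
    }
    double r = rng.nextDouble() * total;
    double acc = 0;
    String last = null;
    for (Map.Entry<String, Double> e : weights.entrySet()) {
      acc += e.getValue();
      last = e.getKey();
      if (r < acc) {
        return last;
      }
    }
    return last;  // floating-point edge case: fall back to the final entry
  }

  public static void main(String[] args) {
    Map<String, Double> weights = new LinkedHashMap<>();
    weights.put("subcluster-1", 3.0);
    weights.put("subcluster-2", 1.0);
    System.out.println(pick(weights, new Random()));
  }
}
{code}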






[jira] [Commented] (YARN-4767) Network issues can cause persistent RM UI outage

2016-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517025#comment-15517025
 ] 

Karthik Kambatla commented on YARN-4767:


+1. Will commit this later today. /cc [~vinodkv]

> Network issues can cause persistent RM UI outage
> 
>
> Key: YARN-4767
> URL: https://issues.apache.org/jira/browse/YARN-4767
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.7.2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-4767.001.patch, YARN-4767.002.patch, 
> YARN-4767.003.patch, YARN-4767.004.patch, YARN-4767.005.patch, 
> YARN-4767.006.patch, YARN-4767.007.patch, YARN-4767.008.patch, 
> YARN-4767.009.patch, YARN-4767.010.patch
>
>
> If a network issue causes an AM web app to resolve the RM proxy's address to 
> something other than what's listed in the allowed proxies list, the 
> AmIpFilter will 302 redirect the RM proxy's request back to the RM proxy.  
> The RM proxy will then consume all available handler threads connecting to 
> itself over and over, resulting in an outage of the web UI.






[jira] [Commented] (YARN-3142) Improve locks in AppSchedulingInfo

2016-09-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517095#comment-15517095
 ] 

Wangda Tan commented on YARN-3142:
--

+1 to the latest patch, thanks [~varun_saxena].

> Improve locks in AppSchedulingInfo
> --
>
> Key: YARN-3142
> URL: https://issues.apache.org/jira/browse/YARN-3142
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: YARN-3142.01.patch, YARN-3142.02.patch, 
> YARN-3142.03.patch
>
>







[jira] [Commented] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-23 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517071#comment-15517071
 ] 

Varun Saxena commented on YARN-5662:


* In the call to GenericTestUtils#waitFor, the 2nd and 3rd parameters (500 and 
200) should be interchanged.
* The test failure seems related. I think we need to shut down the 
DefaultMetricsSystem or simply use another container ID in the test to get rid 
of the error.
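For reference, a sketch of the corrected call order: GenericTestUtils#waitFor
takes the poll interval as its 2nd argument and the overall timeout as its
3rd. The condition below is illustrative only.

{code}
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.hadoop.test.GenericTestUtils;

class WaitForSketch {
  public static void main(String[] args) throws Exception {
    AtomicBoolean done = new AtomicBoolean(true);  // illustrative condition
    // Poll every 200 ms and give up after 500 ms -- not the other way around.
    GenericTestUtils.waitFor(() -> done.get(), 200, 500);
  }
}
{code}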


> Provide an option to enable ContainerMonitor 
> -
>
> Key: YARN-5662
> URL: https://issues.apache.org/jira/browse/YARN-5662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5662.1.patch, YARN-5662.2.patch
>
>
> Currently, if the vmem/pmem check is not enabled, ContainerMonitor does not 
> run. In certain cases, ContainerMonitor also needs to run to monitor things 
> like container-metrics. 






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-23 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517073#comment-15517073
 ] 

Arun Suresh commented on YARN-3139:
---

Thanks for the patch, [~templedf].

AbstractYarnScheduler:
# {{AbstractYarnScheduler::getApplicationAttempt()}} : Do you need the read 
lock, since _applications_ is already a ConcurrentHashMap?
# {{AbstractYarnScheduler::moveAllApps()}} : I don't think we need a write 
lock there, since the only state change is effected via an {{RMAppMoveEvent}}. 
A readLock should suffice, but if _getAppsInQueue_ is synchronized, even that 
might not be required.
# Same for {{AbstractYarnScheduler::killAllAppsInQueue()}}

SchedulerApplicationAttempt:
# {{SchedulerApplicationAttempt::pendingRelease}} : Maybe use a 
ConcurrentSkipListSet?

CapacityScheduler:
# Considering that initScheduler is called exactly once, and the Scheduler 
cannot be used before it is initialized, do we need a write lock there?
# Same goes for {{startSchedulerThreads()}} and {{stopService()}}. 
[~leftnoteasy], thoughts?

FairScheduler:
# In multiple places, it may be possible to reduce the granularity of the 
write locks; e.g. in {{FairScheduler::addApplication()}}, you don't really 
need the lock before line 664.
# In {{removeApplication()}}, do you need to do a get and then a remove? The 
remove will give you the previous app, and you can log if it is null, right? 
Then you won't even need a lock, since *applications* is a ConcurrentHashMap 
(see the sketch after this list).
# In _removeApplicationAttempt_
{code}
SchedulerApplication application =
applications.get(applicationAttemptId.getApplicationId());
FSAppAttempt attempt = getSchedulerApp(applicationAttemptId);
{code}
can be replaced with 
{{applications.get(applicationAttemptId.getApplicationId()).getCurrentAppAttempt()}}
 right?
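A sketch of the removeApplication() suggestion above, with simplified
stand-in types: remove() on a ConcurrentHashMap is atomic and hands back the
previous value, so no lock or prior get is needed.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class RemoveApplicationSketch {
  static final ConcurrentMap<String, Object> applications =
      new ConcurrentHashMap<>();

  static void removeApplication(String applicationId) {
    // Atomic remove; returns the previous value (or null if absent).
    Object application = applications.remove(applicationId);
    if (application == null) {
      System.err.println("Couldn't find application " + applicationId);
    }
  }
}
{code}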



> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using read/write locks. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.






[jira] [Comment Edited] (YARN-5662) Provide an option to enable ContainerMonitor

2016-09-23 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517071#comment-15517071
 ] 

Varun Saxena edited comment on YARN-5662 at 9/23/16 5:38 PM:
-

Couple of comments:
* In the call to GenericTestUtils#waitFor, the 2nd and 3rd parameters (500 and 
200) should be interchanged.
* The test failure seems related. I think we need to shut down the 
DefaultMetricsSystem or simply use another container ID in the test to get rid 
of the error.



was (Author: varun_saxena):
* In the call to GenericTestUtils#waitFor, the 2nd and 3rd parameters (500 and 
200) should be interchanged.
* The test failure seems related. I think we need to shut down the 
DefaultMetricsSystem or simply use another container ID in the test to get rid 
of the error.


> Provide an option to enable ContainerMonitor 
> -
>
> Key: YARN-5662
> URL: https://issues.apache.org/jira/browse/YARN-5662
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-5662.1.patch, YARN-5662.2.patch
>
>
> Currently, if the vmem/pmem check is not enabled, ContainerMonitor does not 
> run. In certain cases, ContainerMonitor also needs to run to monitor things 
> like container-metrics. 






[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-09-23 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517076#comment-15517076
 ] 

Daniel Templeton commented on YARN-4464:


How about we deprecate in branch-2?  It's already unused there.

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch, YARN-4464.005.patch, 
> YARN-4464.006.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so that 
> property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected the RM 
> to restart immediately, but the recovery process was very slow; I waited 
> about 20 min before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is very large. We need to change it to a lower value or 
> document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].






[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517109#comment-15517109
 ] 

Karthik Kambatla commented on YARN-4464:


We should deprecate it in branch-2 and not use it, but that does not absolve 
us of deprecating it in trunk as well. BTW, mind posting a patch for branch-2 
too, for the changes other than the default-value change? 

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch, YARN-4464.005.patch, 
> YARN-4464.006.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so that 
> property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected the RM 
> to restart immediately, but the recovery process was very slow; I waited 
> about 20 min before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is very large. We need to change it to a lower value or 
> document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].






[jira] [Commented] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-23 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517172#comment-15517172
 ] 

Wangda Tan commented on YARN-3139:
--

Thanks [~asuresh] for the review; actually, I'm working on the patch :)

bq. AbstractYarnScheduler::getApplicationAttempt() : Do you need the read lock 
...
Agree, fixed.

bq. AbstractYarnScheduler::moveAllApps() : I don't think we need a write 
lock...
The reason I put a write lock there is that we want the move operation to be 
serialized; for example, we may hit issues when we do two moveAll operations 
at the same time. To me, moveApp is equivalent to removing the app and adding 
a new one. Just as we protect addAppAttempt, we should use the same write lock 
for safety.

bq. SchedulerApplicationAttempt::pendingRelease : Maybe use a 
ConcurrentSkipListSet?
Unless we need the set to be ordered, I think a concurrent hash-backed set 
should be enough; ConcurrentSkipListSet has worse performance (O(log N) vs. 
O(1)).

bq. Considering that initScheduler is called exactly once, and the Scheduler 
cannot be used before it is initialized, do we need a write lock there?
I would prefer to keep it just for safety and consistency, as it doesn't 
affect performance. Same for stopService and startSchedulerThreads.

bq. FairScheduler 2 and 3
Addressed.

And [~jianhe],
I revisited {{checkSchedContainerChangeRequest}}; it is actually safe, so I 
removed the comment from the original method.
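For reference on the pendingRelease point, a plain-JDK sketch of the two
options being weighed (field names are illustrative):

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListSet;

class PendingReleaseSketch {
  // Hash-backed: O(1) expected add/remove/contains, no ordering guarantee.
  static final Set<Long> hashBacked = ConcurrentHashMap.newKeySet();
  // Skip-list-backed: O(log N) operations, but iterates in sorted order.
  static final Set<Long> sorted = new ConcurrentSkipListSet<>();
}
{code}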

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using read/write locks. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.






[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-09-23 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517030#comment-15517030
 ] 

Daniel Templeton commented on YARN-4464:


Changing the defaults is already incompatible, so this is only going into 
trunk/3.0.  Do we also need to worry about removing the constant in that case?

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch, YARN-4464.005.patch, 
> YARN-4464.006.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so that 
> property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected the RM 
> to restart immediately, but the recovery process was very slow; I waited 
> about 20 min before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is very large. We need to change it to a lower value or 
> document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].






[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-09-23 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517051#comment-15517051
 ] 

Karthik Kambatla commented on YARN-4464:


I find it silly that anyone would be using these constants. That said, if 
people do, their code will need changing.

Our compatibility guidelines say we can't remove @Public @Stable APIs without 
deprecating them for a major release. 

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch, YARN-4464.005.patch, 
> YARN-4464.006.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so that 
> property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected the RM 
> to restart immediately, but the recovery process was very slow; I waited 
> about 20 min before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is very large. We need to change it to a lower value or 
> document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].






[jira] [Updated] (YARN-3139) Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler

2016-09-23 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-3139:
-
Attachment: YARN-3139.3.patch

Attached ver.3 patch.

> Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler
> --
>
> Key: YARN-3139
> URL: https://issues.apache.org/jira/browse/YARN-3139
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-3139.0.patch, YARN-3139.1.patch, YARN-3139.2.patch, 
> YARN-3139.3.patch
>
>
> Enhance locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler; as 
> mentioned in YARN-3091, a possible solution is using read/write locks. Other 
> fine-grained locks for specific purposes / bugs should be addressed in 
> separate tickets.






[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-23 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517482#comment-15517482
 ] 

Sangjin Lee commented on YARN-5585:
---

Thanks [~varun_saxena] for summarizing the discussion clearly. The description 
is accurate.

The entity id prefix is optional. If you're happy with the entity id order, 
there is nothing for the framework to do. If you want a different sort order 
than the entity id order, the framework should provide the prefix values on 
write.

Another point: this alternate sort order is basically fixed per framework. In 
other words, you don't change it once a framework has adopted a certain 
natural sort order. If you want to re-sort dynamically, then we're really 
talking about reading all entities and sorting on the client side (i.e. the 
browser). That is essentially the current YARN web UI behavior.

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, then the REST call gives the first/last 100 entities. How do we 
> retrieve the next set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... 
> app-10, *getApps?limit=5* gives app-1 to app-5. But there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in the web UI.
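To make the proposed usage concrete, a hypothetical pagination helper; the
endpoint shape and response handling are assumptions, and the caller would
extract the last id from each page to use as the next fromId.

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class FromIdPaginationSketch {
  // Fetch one page of apps starting after 'fromId' (null for the first page).
  // Parsing the JSON and pulling out the last app id is left to the caller,
  // since it depends on the response schema.
  static String fetchPage(String baseUrl, int limit, String fromId)
      throws Exception {
    String query =
        "limit=" + limit + (fromId == null ? "" : "&fromId=" + fromId);
    HttpURLConnection conn =
        (HttpURLConnection) new URL(baseUrl + "?" + query).openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      StringBuilder page = new StringBuilder();
      String line;
      while ((line = in.readLine()) != null) {
        page.append(line);
      }
      return page.toString();
    }
  }
}
{code}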






[jira] [Commented] (YARN-5525) Make log aggregation service class configurable

2016-09-23 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517484#comment-15517484
 ] 

Subru Krishnan commented on YARN-5525:
--

Thanks [~botong] for working on this. I took a look at the patch, and I want 
to understand your reasoning behind making the whole {{LogAggregationService}} 
pluggable, as I feel we can achieve the intent of the JIRA by simply making 
the {{AppLogAggregator}} implementation pluggable in {{LogAggregationService}}. 
This should be much simpler/cleaner to do, as it's limited to 
*LogAggregationService::initAppAggregator*.
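That is, roughly the following shape inside *initAppAggregator*; the config
key, the default, and the Runnable stand-in for {{AppLogAggregator}} are all
assumptions for illustration.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

class AggregatorFactorySketch {
  // Hypothetical config key, for illustration only.
  static final String APP_LOG_AGGREGATOR_CLASS =
      "yarn.nodemanager.log-aggregation.aggregator-class";

  static Runnable newAggregator(Configuration conf) {
    Class<? extends Runnable> clazz = conf.getClass(
        APP_LOG_AGGREGATOR_CLASS, DefaultAggregator.class, Runnable.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }

  static class DefaultAggregator implements Runnable {
    @Override public void run() { /* perform log aggregation */ }
  }
}
{code}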

> Make log aggregation service class configurable
> ---
>
> Key: YARN-5525
> URL: https://issues.apache.org/jira/browse/YARN-5525
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Giovanni Matteo Fumarola
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-5525.v1.patch, YARN-5525.v2.patch, 
> YARN-5525.v3.patch
>
>
> Make the log aggregation class configurable and extensible, so that 
> alternative log aggregation behaviors, like an app-specific log aggregation 
> directory or log aggregation format, can be implemented and plugged in.






[jira] [Commented] (YARN-5585) [Atsv2] Add a new filter fromId in REST endpoints

2016-09-23 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517489#comment-15517489
 ] 

Sangjin Lee commented on YARN-5585:
---

I do have one question for us to think about. With the entity id prefix, it 
becomes a requirement that the framework provide the entity id prefix on all 
writes, including *updates* to existing entities. This might become an 
interesting challenge.

For example, if tez wanted to use the created time of a vertex as its entity id 
prefix (for sorting), then even for subsequent updates of a vertex entity, tez 
would need to pass the created time, or it would be difficult to write.

[~vrushalic], [~jrottinghuis], thoughts?

[~rohithsharma], is it feasible for tez to provide created time every time a 
tez entity is created or *updated*?

> [Atsv2] Add a new filter fromId in REST endpoints
> -
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5585.v0.patch
>
>
> TimelineReader REST APIs provide a lot of filters to retrieve the 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 
> entities, then the REST call gives the first/last 100 entities. How do we 
> retrieve the next set of 100 entities, i.e. 101 to 200 OR 900 to 801?
> Example: if applications are stored in the database as app-1, app-2 ... 
> app-10, *getApps?limit=5* gives app-1 to app-5. But there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storing a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in the web UI.






[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-09-23 Thread Sreenath Somarajapuram (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15517498#comment-15517498
 ] 

Sreenath Somarajapuram commented on YARN-5145:
--

[~leftnoteasy]

config.env, default-config.js and environment.js were initially created for 
different purposes.
# config.env - For configuring host URLs and other values specific to the 
machine the UI is hosted on. It is a separate file and won't be imploded by 
the build script. Hence, when the UI is hosted in Tomcat or some similar 
server, the user can go in and configure the UI. In other words, this file is 
used to configure the UI in the deployment phase.
# default-config.js - Contains default values for the above, plus other 
constants internal to the UI (like REST endpoint namespaces). This is imploded 
and minified by the build script. Hence this file can be changed only before 
the build, i.e. in the development phase. 
# environment.js - This is Ember's standard file and contains Ember-related 
environment variables. They define how Ember works.

In short, if we remove config.env, someone who wants to host our UI war in 
his/her Tomcat or similar server would find it hard to get the UI working.

Now, accessing the configuration:
# Production - UI hosted in the RM: As mentioned by [~cheersyang], the UI can 
load configurations from the RM's conf endpoint.
# Production - UI hosted in a web server (Tomcat): config.env can be used to 
configure the host URLs. (HADOOP_CONF_DIR could be out of reach, as the 
machine the UI is hosted on need not even be part of the cluster.)
# Production - wrapped: Things would be different if the UI is viewed inside 
other web interfaces like Ambari. We would have to take the configurations 
from REST endpoints provided by the respective interfaces. I guess that is 
out of scope for now.
# Development: config.env can be used to configure the host URLs.

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
> Attachments: YARN-5145-YARN-3368.01.patch
>
>
> The existing YARN UI configuration is under the Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/; we should move it to 
> $HADOOP_CONF_DIR like other configurations.


