[jira] [Commented] (YARN-5111) YARN container system metrics are not aggregated to application

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305186#comment-15305186
 ] 

Sangjin Lee commented on YARN-5111:
---

+1 on the patch. I tested it with a pseudo-distributed setup, and I see the 
memory and CPU correctly aggregated to the application.

Will commit it shortly.

> YARN container system metrics are not aggregated to application
> ---
>
> Key: YARN-5111
> URL: https://issues.apache.org/jira/browse/YARN-5111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5111-YARN-2928.v1.001.patch
>
>
> It appears that the container system metrics (CPU and memory) are not being 
> aggregated onto the application.
> I definitely see container system metrics when I query for YARN_CONTAINER. 
> However, there are no corresponding metrics on the parent application.
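
For context, a minimal sketch (assuming the ATSv2 API on the YARN-2928 branch) of how a container metric can be marked for real-time aggregation to its parent entity; the metric id and the choice of {{SUM}} are illustrative assumptions, not details taken from the attached patch:

{code:java}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetricOperation;

public class ContainerMetricSketch {
  // Build a container CPU metric that the collector can roll up to the
  // parent application. Without an aggregation op the value stays at the
  // container level, which is the symptom this JIRA describes.
  static TimelineMetric cpuMetric(long timestampMs, long millivcores) {
    TimelineMetric metric = new TimelineMetric();
    metric.setId("CPU");                        // illustrative metric id
    metric.addValue(timestampMs, millivcores);  // single time-series point
    metric.setRealtimeAggregationOp(TimelineMetricOperation.SUM); // assumed op
    return metric;
  }
}
{code}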






[jira] [Commented] (YARN-5111) YARN container system metrics are not aggregated to application

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305181#comment-15305181
 ] 

Hadoop QA commented on YARN-5111:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
58s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 9s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806568/YARN-5111-YARN-2928.v1.001.patch
 |
| JIRA Issue | YARN-5111 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a7cee622640 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Comment Edited] (YARN-5178) yarn application never can be killed when failover resource manager

2016-05-27 Thread tu nguyen khac (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305163#comment-15305163
 ] 

tu nguyen khac edited comment on YARN-5178 at 5/28/16 5:04 AM:
---

Sorry Jun Gong, my mistake: I didn't stop the cluster before collecting the 
logs, and many other applications were running, so the RM log is quite chaotic 
and hard to read. I have attached it here; please look for app 
application_1464374175189_0016 in the log.



> yarn application never can be killed when failover resource manager
> ---
>
> Key: YARN-5178
> URL: https://issues.apache.org/jira/browse/YARN-5178
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: tu nguyen khac
>Priority: Minor
> Attachments: rs1.zip, rs2.zip






[jira] [Comment Edited] (YARN-5178) yarn application never can be killed when failover resource manager

2016-05-27 Thread tu nguyen khac (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305163#comment-15305163
 ] 

tu nguyen khac edited comment on YARN-5178 at 5/28/16 4:55 AM:
---

Sorry Jun Gong, my mistake: I didn't stop the cluster before collecting the 
logs, and many other applications were running, so the RM log is quite chaotic 
and hard to read, but I have attached it here; please find the app.



> yarn application never can be killed when failover resource manager
> ---
>
> Key: YARN-5178
> URL: https://issues.apache.org/jira/browse/YARN-5178
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: tu nguyen khac
>Priority: Minor
> Attachments: rs1.zip, rs2.zip






[jira] [Comment Edited] (YARN-5178) yarn application never can be killed when failover resource manager

2016-05-27 Thread tu nguyen khac (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305163#comment-15305163
 ] 

tu nguyen khac edited comment on YARN-5178 at 5/28/16 4:56 AM:
---

Sorry Jun Gong, my mistake: I didn't stop the cluster before collecting the 
logs, and many other applications were running, so the RM log is quite chaotic 
and hard to read. I have attached it here; please look for app 
application_1464374175189_0016 in the log.



> yarn application never can be killed when failover resource manager
> ---
>
> Key: YARN-5178
> URL: https://issues.apache.org/jira/browse/YARN-5178
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: tu nguyen khac
>Priority: Minor
> Attachments: rs1.zip, rs2.zip






[jira] [Updated] (YARN-5178) yarn application never can be killed when failover resource manager

2016-05-27 Thread tu nguyen khac (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tu nguyen khac updated YARN-5178:
-
Attachment: rs2.zip

Rs2 log

> yarn application never can be killed when failover resource manager
> ---
>
> Key: YARN-5178
> URL: https://issues.apache.org/jira/browse/YARN-5178
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: tu nguyen khac
>Priority: Minor
> Attachments: rs1.zip, rs2.zip






[jira] [Updated] (YARN-5178) yarn application never can be killed when failover resource manager

2016-05-27 Thread tu nguyen khac (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tu nguyen khac updated YARN-5178:
-
Attachment: rs1.zip

RS1 log

> yarn application never can be killed when failover resource manager
> ---
>
> Key: YARN-5178
> URL: https://issues.apache.org/jira/browse/YARN-5178
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: tu nguyen khac
>Priority: Minor
> Attachments: rs1.zip






[jira] [Commented] (YARN-5178) yarn application never can be killed when failover resource manager

2016-05-27 Thread tu nguyen khac (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305163#comment-15305163
 ] 

tu nguyen khac commented on YARN-5178:
--

Sorry Jun Gong, my mistake: I didn't stop the cluster before collecting the 
logs, and many other applications were running, so the RM log is quite chaotic 
and hard to read, but I have attached it here.

> yarn application never can be killed when failover resource manager
> ---
>
> Key: YARN-5178
> URL: https://issues.apache.org/jira/browse/YARN-5178
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: tu nguyen khac
>Priority: Minor
> Attachments: rs1.zip






[jira] [Created] (YARN-5178) yarn application never can be killed when failover resource manager

2016-05-27 Thread tu nguyen khac (JIRA)
tu nguyen khac created YARN-5178:


 Summary: yarn application never can be killed when failover 
resource manager
 Key: YARN-5178
 URL: https://issues.apache.org/jira/browse/YARN-5178
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.6.0
Reporter: tu nguyen khac
Priority: Minor


Dear all,

The problem I found is this:

In my cluster environment (16 nodes, 2 ResourceManagers, HA), an application 
was submitted to the first ResourceManager (Rs1). Suddenly the Rs1 machine 
hung; the application failed over to Rs2 but never ran:

Name:   cpaBidEcom
Application Type:   SPARK
Application Tags:   
State:  ACCEPTED
FinalStatus:UNDEFINED
Started:28-May-2016 01:46:13
Elapsed:7hrs, 35mins, 32sec
Tracking URL:   UNASSIGNED

Our developer then tried to kill the application with the command:

yarn application -kill app_

and it printed this output forever:

16/05/28 09:24:48 INFO impl.YarnClientImpl: Waiting for application 
application_1464374175189_0016 to be killed.
16/05/28 09:24:50 INFO impl.YarnClientImpl: Waiting for application 
application_1464374175189_0016 to be killed.
(the same message repeats every two seconds)

I think it is probably a bug. It is hard to reproduce, but please review it for 
me.
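
For reference, a minimal sketch of the client-side call that produces that loop, using the public {{YarnClient}} API; the description of the polling behavior is inferred from the log above, not from RM source:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class KillSketch {
  public static void main(String[] args) throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    client.init(new Configuration());
    client.start();
    ApplicationId appId = ApplicationId.newInstance(1464374175189L, 16);
    // killApplication() submits the kill request and then polls the
    // application report, logging "Waiting for application ... to be killed"
    // on each round. If the RM that took over after the failover never moves
    // the app out of ACCEPTED, the poll repeats forever, as in the log above.
    client.killApplication(appId);
    client.stop();
  }
}
{code}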







[jira] [Commented] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305064#comment-15305064
 ] 

Hudson commented on YARN-5117:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9881 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9881/])
YARN-5117. QueuingContainerManager does not start GUARANTEED Container (arun 
suresh: rev 4fc09a897b25914a9b9321cc443f3f3ff3d776d5)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/queuing/QueuingContainerManagerImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/queuing/TestQueuingContainerManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java


> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0
>
> Attachments: YARN-5117.001.patch, YARN-5117.002.patch, 
> YARN-5117.003.patch, YARN-5117.004.patch
>
>
> When NM Queuing is turned on, it looks like GUARANTEED containers do not 
> start even when there are no containers running on the NM. The following is 
> seen in the logs:
> {noformat}
> .
> 2016-05-19 22:34:12,711 INFO  [IPC Server handler 0 on 49809] 
> queuing.QueuingContainerManagerImpl 
> (QueuingContainerManagerImpl.java:pickOpportunisticContainersToKill(351)) - 
> There are no sufficient resources to start guaranteed 
> container_1463711648301_0001_01_01 even after attempting to kill any 
> running opportunistic containers.
> .
> {noformat}
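
To make the failure mode concrete, here is a hedged sketch of the accounting that has to hold for a GUARANTEED container to be admitted on an NM; the class, field, and method names are illustrative, not the actual QueuingContainerManagerImpl code:

{code:java}
// Illustrative sketch only: if allocated resources are never decremented when
// containers finish, the admission check fails even on an idle NM, producing
// the "no sufficient resources ... even after attempting to kill any running
// opportunistic containers" log above.
public class NmAdmissionSketch {
  private final long capacityMb = 8192; // NM capacity (assumed value)
  private long allocatedMb;             // resources held by running containers

  void containerStarted(long mb)  { allocatedMb += mb; }
  void containerFinished(long mb) { allocatedMb -= mb; } // must not be skipped

  boolean canStartGuaranteed(long requestedMb) {
    // Stale allocatedMb here means guaranteed containers queue forever.
    return capacityMb - allocatedMb >= requestedMb;
  }
}
{code}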






[jira] [Commented] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305058#comment-15305058
 ] 

Arun Suresh commented on YARN-5117:
---

Thanks for the fix, [~kkaranasos]!
+1, LGTM

> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5117.001.patch, YARN-5117.002.patch, 
> YARN-5117.003.patch, YARN-5117.004.patch






[jira] [Commented] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305055#comment-15305055
 ] 

Hadoop QA commented on YARN-5117:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 0 new + 29 unchanged - 4 fixed = 29 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 55s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806770/YARN-5117.004.patch |
| JIRA Issue | YARN-5117 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6432eeb064dc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21890c4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11753/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11753/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>   

[jira] [Commented] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305048#comment-15305048
 ] 

Hadoop QA commented on YARN-5077:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 24s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806762/YARN-5077.006.patch |
| JIRA Issue | YARN-5077 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 13daf65b038b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21890c4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11751/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11751/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11751/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11751/console |
| Powered 

[jira] [Created] (YARN-5177) Make Node-Manager Download-Resource Component extensible.

2016-05-27 Thread Emeka (JIRA)
Emeka created YARN-5177:
---

 Summary: Make Node-Manager Download-Resource Component extensible.
 Key: YARN-5177
 URL: https://issues.apache.org/jira/browse/YARN-5177
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager
Affects Versions: 2.7.0
Reporter: Emeka
Priority: Minor


Problem:
- Downloading files to a local machine/node is called "resource-localization".
- There are two components that perform resource-localization (PublicLocalizer 
and ContainerLocalizer).
- Both components use FSDownload.class to perform their downloads.
- We need a custom implementation of FSDownload.

Solution:
- With this change, we make FSDownload.class extensible by wrapping it in a new 
ResourceDownloader interface (see the sketch below).
- We also update PublicLocalizer and ContainerLocalizer to load a 
ResourceDownloader rather than FSDownload.
- NOTE: We use reflection to load the right implementation at runtime.
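
A hedged sketch of what this seam could look like, in plain Java against the public Hadoop APIs; the {{ResourceDownloader}} name and the configuration key are this proposal's (assumed) names, not an existing Hadoop API. {{FSDownload}} already implements {{Callable<Path>}}, which is what makes the wrapping natural:

{code:java}
import java.util.concurrent.Callable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical seam over FSDownload, as proposed above.
public interface ResourceDownloader extends Callable<Path> {

  // Reflectively load the configured implementation at runtime; the config
  // key and the FSDownload-backed default are assumptions for this sketch.
  static ResourceDownloader newInstance(Configuration conf,
      Class<? extends ResourceDownloader> defaultImpl) {
    Class<? extends ResourceDownloader> clazz = conf.getClass(
        "yarn.nodemanager.resource-downloader.class",  // hypothetical key
        defaultImpl, ResourceDownloader.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}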








[jira] [Updated] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5117:
-
Attachment: YARN-5117.004.patch

Fixing checkstyle.

> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5117.001.patch, YARN-5117.002.patch, 
> YARN-5117.003.patch, YARN-5117.004.patch






[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305029#comment-15305029
 ] 

Hadoop QA commented on YARN-5105:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 48s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
53s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} YARN-2928 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 36s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 in YARN-2928 has 30 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: patch 
generated 7 new + 11 unchanged - 7 fixed = 18 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 45s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 17s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 7s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed 

[jira] [Comment Edited] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-27 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305014#comment-15305014
 ] 

Carlo Curino edited comment on YARN-5164 at 5/28/16 12:12 AM:
--

After some conversation with [~chris.douglas], there might be a workaround if we 
avoid reporting the exact point of violation. The intuition is to compute the 
entire set of integral "additions" and "removals" separately, and then use our 
usual {{RLESparseResourceAllocation.merge()}} to compute the correct integral 
and compare it to the avg constraint. This could work (it passes all the tests), 
but we should dig deeper into corner cases, as it is tricky. Posting an initial 
patch (fresh out of hacking it) to see if folks can help spot issues. I will 
circle back on this.

There is, however, a risk of false positives/false negatives during the 
transitions (as we are not capturing the "slope" of the integral, per our 
previous comment). 
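
As a rough illustration of the interval math in question, a hedged sketch (plain Java, not Hadoop's {{RLESparseResourceAllocation}}) of merging two run-length-encoded step functions in one pass over their change points, which is what makes the cost proportional to the number of changes rather than to the horizon length:

{code:java}
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class RleMergeSketch {
  // A step function: key = time at which the value changes, value = level
  // that holds until the next key. Merging visits only change points, so a
  // months-long horizon with few reservations stays cheap even at 1-second
  // granularity.
  static NavigableMap<Long, Long> add(NavigableMap<Long, Long> a,
                                      NavigableMap<Long, Long> b) {
    NavigableMap<Long, Long> out = new TreeMap<>();
    TreeMap<Long, Boolean> changePoints = new TreeMap<>();
    for (long t : a.keySet()) { changePoints.put(t, true); }
    for (long t : b.keySet()) { changePoints.put(t, true); }
    for (long t : changePoints.keySet()) {
      out.put(t, valueAt(a, t) + valueAt(b, t)); // O(#change-points) total
    }
    return out;
  }

  static long valueAt(NavigableMap<Long, Long> m, long t) {
    Map.Entry<Long, Long> e = m.floorEntry(t);
    return e == null ? 0 : e.getValue();
  }
}
{code}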





> CapacityOvertimePolicy does not take advantaged of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf, YARN-5164.1.patch
>
>
> As a consequence, small time granularities (e.g., 1 sec) combined with a long 
> time horizon for a reservation (e.g., months) make the check run rather slowly 
> (~10 sec). 
> The proposed resolution is to switch to interval math in the checking, similar 
> to what YARN-4359 does for agents.






[jira] [Commented] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305023#comment-15305023
 ] 

Hadoop QA commented on YARN-5117:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 2 new + 29 unchanged - 4 fixed = 31 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 53s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806761/YARN-5117.003.patch |
| JIRA Issue | YARN-5117 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e608f2e7b013 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21890c4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11752/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11752/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11752/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: 

[jira] [Commented] (YARN-4271) Make the NodeManager's health checker service pluggable

2016-05-27 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305017#comment-15305017
 ] 

Subru Krishnan commented on YARN-4271:
--


[~rkanter] (cc: [~rchiang]), when we originally opened the JIRA we were 
thinking more along the lines of what [~kasha] pointed out, i.e.:
  # make the {{LocalDirsHandlerService}} pluggable, as we obviously have 
behavior specific to cloud settings. For example, as [~rmohan] pointed out 
earlier, we receive health signals (incl. disks) from Azure which we need to 
pipe through to the NM.
  # make the {{NodeHealthCheckerService}} smarter, instead of a boolean 
healthy/unhealthy state.

We were just looking into YARN-3503 and it looks like that will suffice for (2) 
for now. So I agree that we can keep it simple and reopen YARN-5137, as it will 
cover (1). We can always revisit if we hit a scenario in the future that is not 
covered.
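
For concreteness, a minimal sketch of what a pluggable health reporter could look like, using the usual Hadoop idiom of a config-driven class loaded via {{ReflectionUtils}}. The interface and config key below are hypothetical, not existing YARN APIs.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical extension point; not an existing YARN interface.
interface NodeHealthReporter {
  boolean isHealthy();
  String getHealthReport();
}

// Default reporter: always healthy (stand-in for the script/disk checks).
class DefaultHealthReporter implements NodeHealthReporter {
  public boolean isHealthy() { return true; }
  public String getHealthReport() { return ""; }
}

public class PluggableHealthCheckerSketch {
  // Illustrative config key, not a real yarn-site.xml property.
  static final String REPORTER_CLASS_KEY =
      "yarn.nodemanager.health-reporter.class";

  static NodeHealthReporter load(Configuration conf) {
    Class<? extends NodeHealthReporter> clazz = conf.getClass(
        REPORTER_CLASS_KEY, DefaultHealthReporter.class,
        NodeHealthReporter.class);
    // ReflectionUtils.newInstance also pushes the Configuration into
    // the instance if it is Configurable.
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}

A cloud-specific implementation (e.g., one consuming Azure health signals) would then be dropped in purely via configuration.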



> Make the NodeManager's health checker service pluggable
> ---
>
> Key: YARN-4271
> URL: https://issues.apache.org/jira/browse/YARN-4271
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Subru Krishnan
>Assignee: Raghav Mohan
>Priority: Minor
>
> This JIRA proposes making the NodeHealthCheckerService in the NM pluggable as 
> in cloud environments like Azure we want to tap into the provided health 
> checkers for disk and other service signal statuses. The idea is to extend 
> the existing NodeHealthCheckerService and hook in custom implementation to 
> evaluate if the node is healthy or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5164) CapacityOvertimePolicy does not take advantage of plan RLE

2016-05-27 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305014#comment-15305014
 ] 

Carlo Curino commented on YARN-5164:


Given some convo with [~chris.douglas], there might be a workaround if we avoid 
reporting the exact point of violation. The intuition is to compute the entire 
sets of integral "additions" and "removals" separately, and then use our usual 
{{RLESparseResourceAllocation.merge()}} to compute the correct integral and 
compare it to the avg constraint. This could work (it passes all the tests), 
but we should dig deeper into corner cases, as it is tricky. Posting an initial 
patch (fresh out of hacking it) to see if folks can help spot issues. I will 
circle back on this.


> CapacityOvertimePolicy does not take advantage of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf, YARN-5164.1.patch
>
>
> As a consequence, small time granularities (e.g., 1 sec) and long time 
> horizons for a reservation (e.g., months) make the check run rather slowly 
> (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5164) CapacityOvertimePolicy does not take advantage of plan RLE

2016-05-27 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5164:
---
Attachment: YARN-5164.1.patch

> CapacityOvertimePolicy does not take advantage of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf, YARN-5164.1.patch
>
>
> As a consequence, small time granularities (e.g., 1 sec) and long time 
> horizons for a reservation (e.g., months) make the check run rather slowly 
> (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0

2016-05-27 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305000#comment-15305000
 ] 

Yufei Gu edited comment on YARN-5077 at 5/27/16 11:49 PM:
--

Uploaded patch 006 to update the way we calculate maxAMShare, so that an AM in 
a zero-weight queue can get resources regardless of whether there are active 
non-zero-weight queues. 


was (Author: yufeigu):
Uploaded patch 006 to update the way we calculate maxAMShare. 

> Fix FSLeafQueue#getFairShare() for queues with weight 0.0
> -
>
> Key: YARN-5077
> URL: https://issues.apache.org/jira/browse/YARN-5077
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5077.001.patch, YARN-5077.002.patch, 
> YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, 
> YARN-5077.006.patch
>
>
> 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns 
>  
> 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns 
> 
> In case 1), no container ever gets allocated for an AM because, from the 
> viewpoint of the RM, there is never any headroom to allocate a container on 
> that queue.
> For example, we have a pool with the following weights: 
> - root.dev 0.0 
> - root.product 1.0
> root.dev is a best-effort pool and should only get resources if root.product 
> is not running. In our tests, with no jobs running under root.product, jobs 
> started in the root.dev queue stay stuck in the ACCEPTED state and never 
> start.
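
To see why a 0.0 weight starves the AM, here is a back-of-envelope sketch of weight-proportional fair share; this is a simplification for illustration, not FSLeafQueue's actual computation.

{code}
// Simplified weight-proportional share; not FSLeafQueue's actual logic.
public class FairShareSketch {

  static long fairShare(double queueWeight, double totalWeight,
                        long clusterMemoryMb) {
    return totalWeight == 0 ? 0
        : (long) (clusterMemoryMb * (queueWeight / totalWeight));
  }

  public static void main(String[] args) {
    long cluster = 16384; // MB
    // root.dev has weight 0.0, root.product has weight 1.0 (as above).
    System.out.println("root.dev     = " + fairShare(0.0, 1.0, cluster)); // 0
    System.out.println("root.product = " + fairShare(1.0, 1.0, cluster)); // 16384
    // A maxAMShare computed against a 0 MB fair share leaves no room for
    // the AM container, so root.dev jobs never get started.
  }
}
{code}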



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0

2016-05-27 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305000#comment-15305000
 ] 

Yufei Gu commented on YARN-5077:


Uploaded patch 006 to update the way we calculate maxAMShare. 

> Fix FSLeafQueue#getFairShare() for queues with weight 0.0
> -
>
> Key: YARN-5077
> URL: https://issues.apache.org/jira/browse/YARN-5077
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5077.001.patch, YARN-5077.002.patch, 
> YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, 
> YARN-5077.006.patch
>
>
> 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns 
>  
> 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns 
> 
> In case 1), no container ever gets allocated for an AM because, from the 
> viewpoint of the RM, there is never any headroom to allocate a container on 
> that queue.
> For example, we have a pool with the following weights: 
> - root.dev 0.0 
> - root.product 1.0
> root.dev is a best-effort pool and should only get resources if root.product 
> is not running. In our tests, with no jobs running under root.product, jobs 
> started in the root.dev queue stay stuck in the ACCEPTED state and never 
> start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5077) Fix FSLeafQueue#getFairShare() for queues with weight 0.0

2016-05-27 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5077:
---
Attachment: YARN-5077.006.patch

> Fix FSLeafQueue#getFairShare() for queues with weight 0.0
> -
>
> Key: YARN-5077
> URL: https://issues.apache.org/jira/browse/YARN-5077
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5077.001.patch, YARN-5077.002.patch, 
> YARN-5077.003.patch, YARN-5077.004.patch, YARN-5077.005.patch, 
> YARN-5077.006.patch
>
>
> 1) When a queue's weight is set to 0.0, FSLeafQueue#getFairShare() returns 
>  
> 2) When a queue's weight is nonzero, FSLeafQueue#getFairShare() returns 
> 
> In case 1), no container ever gets allocated for an AM because, from the 
> viewpoint of the RM, there is never any headroom to allocate a container on 
> that queue.
> For example, we have a pool with the following weights: 
> - root.dev 0.0 
> - root.product 1.0
> root.dev is a best-effort pool and should only get resources if root.product 
> is not running. In our tests, with no jobs running under root.product, jobs 
> started in the root.dev queue stay stuck in the ACCEPTED state and never 
> start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5117:
-
Attachment: YARN-5117.003.patch

Getting rid of the {{waitForContainerState}} helper in 
{{TestQueuingContainerManagerImpl}} that was causing timing issues.
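
Where a wait is still needed, the usual Hadoop-test alternative to fixed sleeps is polling with {{GenericTestUtils.waitFor}}. A sketch, assuming a hypothetical {{getContainerState}} accessor standing in for whatever the test can observe:

{code}
import java.util.concurrent.TimeoutException;
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

public class ContainerStateWaitSketch {

  // Poll until the container reaches the expected state or time out,
  // instead of sleeping a fixed amount (the usual source of flakiness).
  static void waitForState(final Object expected)
      throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(new Supplier<Boolean>() {
      @Override
      public Boolean get() {
        return expected.equals(getContainerState());
      }
    }, 100 /* poll every 100 ms */, 10000 /* give up after 10 s */);
  }

  // Hypothetical accessor; stands in for the real state lookup.
  static Object getContainerState() {
    return null;
  }
}
{code}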

> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5117.001.patch, YARN-5117.002.patch, 
> YARN-5117.003.patch
>
>
> When NM Queuing is turned on, it looks like GUARANTEED containers do not 
> start even when there are no containers running on the NM. The following is 
> seen in the logs:
> {noformat}
> .
> 2016-05-19 22:34:12,711 INFO  [IPC Server handler 0 on 49809] 
> queuing.QueuingContainerManagerImpl 
> (QueuingContainerManagerImpl.java:pickOpportunisticContainersToKill(351)) - 
> There are no sufficient resources to start guaranteed 
> container_1463711648301_0001_01_01 even after attempting to kill any 
> running opportunistic containers.
> .
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304990#comment-15304990
 ] 

Hadoop QA commented on YARN-5117:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 0 new + 29 unchanged - 4 fixed = 29 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 37s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 0s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806751/YARN-5117.002.patch |
| JIRA Issue | YARN-5117 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2e4a2b0eb13 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21890c4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11744/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11744/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11744/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11744/console 

[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304978#comment-15304978
 ] 

Hadoop QA commented on YARN-5105:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 22s 
{color} | {color:red} Docker failed to build yetus/hadoop:cf2ee45. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806747/YARN-5105-YARN-2928.04.patch
 |
| JIRA Issue | YARN-5105 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11749/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has the potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": 407539712,
> "1463518170347": 407539712,
> {noformat}
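
The natural fix is to return only the latest datum unless the caller explicitly asks for the full series. A minimal sketch with a plain {{TreeMap}}, not the actual TimelineMetric API:

{code}
import java.util.NavigableMap;
import java.util.TreeMap;

public class LatestValueSketch {

  // Collapse a full time series (timestamp -> value) to its single
  // latest point, which is all a default REST query needs. The full
  // map is returned only when the caller asks for a series.
  static NavigableMap<Long, Number> forQuery(
      NavigableMap<Long, Number> series, boolean wantTimeSeries) {
    if (wantTimeSeries || series.isEmpty()) {
      return series;
    }
    TreeMap<Long, Number> latest = new TreeMap<>();
    latest.put(series.lastKey(), series.lastEntry().getValue());
    return latest;
  }

  public static void main(String[] args) {
    TreeMap<Long, Number> mem = new TreeMap<>();
    mem.put(1463518170347L, 407539712L);
    mem.put(1463518173363L, 407539712L);
    System.out.println(forQuery(mem, false)); // {1463518173363=407539712}
  }
}
{code}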



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304977#comment-15304977
 ] 

Hadoop QA commented on YARN-5105:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 4m 58s 
{color} | {color:red} Docker failed to build yetus/hadoop:cf2ee45. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806747/YARN-5105-YARN-2928.04.patch
 |
| JIRA Issue | YARN-5105 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11747/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has the potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": 407539712,
> "1463518170347": 407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304971#comment-15304971
 ] 

Hadoop QA commented on YARN-5105:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 28s 
{color} | {color:red} Docker failed to build yetus/hadoop:cf2ee45. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806747/YARN-5105-YARN-2928.04.patch
 |
| JIRA Issue | YARN-5105 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11743/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has the potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": 407539712,
> "1463518170347": 407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304962#comment-15304962
 ] 

Hadoop QA commented on YARN-5117:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 49s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 0 new + 29 unchanged - 4 fixed = 29 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 6s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 3s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806751/YARN-5117.002.patch |
| JIRA Issue | YARN-5117 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 998abbb727c9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21890c4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11742/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11742/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11742/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11742/console |

[jira] [Commented] (YARN-5169) most YARN events have timestamp of -1

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304960#comment-15304960
 ] 

Sangjin Lee commented on YARN-5169:
---

bq. This is also my concern. For NMs, my hunch is we have much less events 
generated, therefore performance problem should be less severe?

IMO the JVM is past the point of developers having to worry about the 
performance of {{System.currentTimeMillis()}}. That may have been a legitimate 
concern 5 years ago, but probably not any longer, no?

> most YARN events have timestamp of -1
> -
>
> Key: YARN-5169
> URL: https://issues.apache.org/jira/browse/YARN-5169
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Sangjin Lee
>
> Most of the YARN events (subclasses of {{AbstractEvent}}) have a timestamp of 
> -1. {{AbstractEvent}} has two constructors: one initializes the timestamp to 
> -1 and the other to a caller-provided value. But most events use the former 
> (thus a timestamp of -1).
> Some of the more common events, including {{ApplicationEvent}}, 
> {{ContainerEvent}}, {{JobEvent}}, etc. do not set the timestamp.
> The rationale for this behavior seems to be mentioned in {{AbstractEvent}}:
> {code}
>   // use this if you DON'T care about the timestamp
>   public AbstractEvent(TYPE type) {
> this.type = type;
> // We're not generating a real timestamp here.  It's too expensive.
> timestamp = -1L;
>   }
> {code}
> This absence of the timestamp isn't really visible in many cases and 
> therefore may have gone unnoticed, but the timeline service exposes this 
> problem very visibly.
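
Per the description, the fix for an individual event type amounts to calling the timestamp-taking constructor. A sketch with an illustrative event class (not one of the actual YARN events):

{code}
import org.apache.hadoop.yarn.event.AbstractEvent;

enum DemoEventType { STARTED }

// Illustrative event class: pass a real timestamp to the two-argument
// AbstractEvent constructor instead of defaulting to -1.
class DemoEvent extends AbstractEvent<DemoEventType> {
  DemoEvent(DemoEventType type) {
    super(type, System.currentTimeMillis());
  }
}
{code}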



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5111) YARN container system metrics are not aggregated to application

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304955#comment-15304955
 ] 

Sangjin Lee commented on YARN-5111:
---

I'll test it as part of my review.

> YARN container system metrics are not aggregated to application
> ---
>
> Key: YARN-5111
> URL: https://issues.apache.org/jira/browse/YARN-5111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5111-YARN-2928.v1.001.patch
>
>
> It appears that the container system metrics (CPU and memory) are not being 
> aggregated onto the application.
> I definitely see container system metrics when I query for YARN_CONTAINER. 
> However, there are no corresponding metrics on the parent application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5098) Yarn Application log Aggregation fails because NM cannot get correct HDFS delegation token

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304953#comment-15304953
 ] 

Hadoop QA commented on YARN-5098:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 7 new + 88 unchanged - 0 fixed = 95 total (was 88) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 49s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806746/YARN-5098.1.patch |
| JIRA Issue | YARN-5098 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 921a56fd862a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 21890c4 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11740/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11740/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  

[jira] [Updated] (YARN-2882) Add an OPPORTUNISTIC ExecutionType

2016-05-27 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-2882:
--
Assignee: Konstantinos Karanasos  (was: Arun Suresh)

> Add an OPPORTUNISTIC ExecutionType
> --
>
> Key: YARN-2882
> URL: https://issues.apache.org/jira/browse/YARN-2882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0
>
> Attachments: YARN-2882-yarn-2877.001.patch, 
> YARN-2882-yarn-2877.002.patch, YARN-2882-yarn-2877.003.patch, 
> YARN-2882-yarn-2877.004.patch, YARN-2882.005.patch, yarn-2882.patch
>
>
> This JIRA introduces the notion of container types.
> We propose two initial types of containers: guaranteed-start and queueable 
> containers.
> Guaranteed-start containers are the existing containers, which are allocated 
> by the central RM and started immediately once allocated.
> Queueable is a new type of container that can be queued at the NM, so its 
> execution may be arbitrarily delayed.
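
As the issue title suggests, the two types map onto an execution-type record. A minimal sketch of the semantics (the real record lives under org.apache.hadoop.yarn.api.records; this enum is only an illustration):

{code}
// Illustration of the two container types described above.
public enum ExecutionTypeSketch {
  // Allocated by the central RM, started immediately once allocated.
  GUARANTEED,
  // May be queued at the NM; execution can be arbitrarily delayed.
  OPPORTUNISTIC
}
{code}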



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304951#comment-15304951
 ] 

Hadoop QA commented on YARN-5105:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
31s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} YARN-2928 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 36s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 in YARN-2928 has 30 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: patch 
generated 7 new + 11 unchanged - 7 fixed = 18 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 4s {color} | 
{color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch failed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 12s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed with 

[jira] [Assigned] (YARN-2882) Add an OPPORTUNISTIC ExecutionType

2016-05-27 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reassigned YARN-2882:
-

Assignee: Arun Suresh  (was: Konstantinos Karanasos)

> Add an OPPORTUNISTIC ExecutionType
> --
>
> Key: YARN-2882
> URL: https://issues.apache.org/jira/browse/YARN-2882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Konstantinos Karanasos
>Assignee: Arun Suresh
> Fix For: 2.9.0
>
> Attachments: YARN-2882-yarn-2877.001.patch, 
> YARN-2882-yarn-2877.002.patch, YARN-2882-yarn-2877.003.patch, 
> YARN-2882-yarn-2877.004.patch, YARN-2882.005.patch, yarn-2882.patch
>
>
> This JIRA introduces the notion of container types.
> We propose two initial types of containers: guaranteed-start and queueable 
> containers.
> Guaranteed-start containers are the existing containers, which are allocated 
> by the central RM and started immediately once allocated.
> Queueable is a new type of container that can be queued at the NM, so its 
> execution may be arbitrarily delayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5111) YARN container system metrics are not aggregated to application

2016-05-27 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304942#comment-15304942
 ] 

Li Lu commented on YARN-5111:
-

Thanks [~Naganarasimha]! I tried the patch and I can see from the posted 
metrics that "aggregationOp" is available. However, I'm trying it on a Mac, so 
there are NPEs when reporting physical memory usage and CPU usage. Can someone 
running on a Linux machine quickly try it? Thanks! 
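
The NPEs are consistent with {{ResourceCalculatorPlugin.getResourceCalculatorPlugin()}} returning null on platforms it does not support (such as OS X). A defensive sketch of the kind of guard needed; illustrative only, the real fix belongs in the containers-monitor code:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.util.ResourceCalculatorPlugin;

public class SafeCpuMemSketch {

  // getResourceCalculatorPlugin returns null when the platform is not
  // supported, so callers must guard before reading memory/CPU metrics.
  static long physicalMemory(Configuration conf) {
    ResourceCalculatorPlugin plugin =
        ResourceCalculatorPlugin.getResourceCalculatorPlugin(null, conf);
    if (plugin == null) {
      return -1; // metric unavailable on this platform
    }
    return plugin.getPhysicalMemorySize();
  }
}
{code}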

> YARN container system metrics are not aggregated to application
> ---
>
> Key: YARN-5111
> URL: https://issues.apache.org/jira/browse/YARN-5111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5111-YARN-2928.v1.001.patch
>
>
> It appears that the container system metrics (CPU and memory) are not being 
> aggregated onto the application.
> I definitely see container system metrics when I query for YARN_CONTAINER. 
> However, there are no corresponding metrics on the parent application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5117:
-
Attachment: YARN-5117.002.patch

Adding the proper file.

> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5117.001.patch, YARN-5117.002.patch
>
>
> When NM Queuing is turned on, it looks like GUARANTEED containers do not 
> start even when there are no containers running on the NM. The following is 
> seen in the logs:
> {noformat}
> .
> 2016-05-19 22:34:12,711 INFO  [IPC Server handler 0 on 49809] 
> queuing.QueuingContainerManagerImpl 
> (QueuingContainerManagerImpl.java:pickOpportunisticContainersToKill(351)) - 
> There are no sufficient resources to start guaranteed 
> container_1463711648301_0001_01_01 even after attempting to kill any 
> running opportunistic containers.
> .
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304909#comment-15304909
 ] 

Hadoop QA commented on YARN-5105:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 10m 1s 
{color} | {color:red} Docker failed to build yetus/hadoop:cf2ee45. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806745/YARN-5105-YARN-2928.04.patch
 |
| JIRA Issue | YARN-5105 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11739/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has the potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": 407539712,
> "1463518170347": 407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5117:
-
Attachment: (was: YARN-5117.002.patch)

> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5117.001.patch
>
>
> When NM Queuing is turned on, it looks like GUARANTEED containers do not 
> start even when there are no containers running on the NM. The following is 
> seen in the logs:
> {noformat}
> .
> 2016-05-19 22:34:12,711 INFO  [IPC Server handler 0 on 49809] 
> queuing.QueuingContainerManagerImpl 
> (QueuingContainerManagerImpl.java:pickOpportunisticContainersToKill(351)) - 
> There are no sufficient resources to start guaranteed 
> container_1463711648301_0001_01_01 even after attempting to kill any 
> running opportunistic containers.
> .
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5117) QueuingContainerManager does not start GUARANTEED Container even if Resources are available

2016-05-27 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5117:
-
Attachment: YARN-5117.002.patch

Adding a new version of the patch with the following changes:
- Fixed the way we check for available CPU (a sketch of this kind of check 
follows below).
- Fixed {{TestQueuingContainerManagerImpl}} given the above change.
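
A minimal sketch of that kind of availability check, with plain vcore counts rather than the NM's actual resource accounting:

{code}
// Toy vcore-availability check; not the NM's actual accounting.
public class CpuCheckSketch {

  static boolean canStart(int requestedVCores, int nodeVCores,
                          int vCoresInUse) {
    return requestedVCores <= nodeVCores - vCoresInUse;
  }

  public static void main(String[] args) {
    // 8-core node with 6 vcores in use: a 2-vcore GUARANTEED container
    // fits; a 4-vcore one must queue or kill opportunistic containers.
    System.out.println(canStart(2, 8, 6)); // true
    System.out.println(canStart(4, 8, 6)); // false
  }
}
{code}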

> QueuingContainerManager does not start GUARANTEED Container even if Resources 
> are available
> ---
>
> Key: YARN-5117
> URL: https://issues.apache.org/jira/browse/YARN-5117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5117.001.patch, YARN-5117.002.patch
>
>
> When NM Queuing is turned on, it looks like GUARANTEED containers do not 
> start even when there are no containers running on the NM. The following is 
> seen in the logs:
> {noformat}
> .
> 2016-05-19 22:34:12,711 INFO  [IPC Server handler 0 on 49809] 
> queuing.QueuingContainerManagerImpl 
> (QueuingContainerManagerImpl.java:pickOpportunisticContainersToKill(351)) - 
> There are no sufficient resources to start guaranteed 
> container_1463711648301_0001_01_01 even after attempting to kill any 
> running opportunistic containers.
> .
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5105:
---
Attachment: (was: YARN-5105-YARN-2928.04.patch)

> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has a potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": ​407539712,
> "1463518170347": ​407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5105:
---
Attachment: YARN-5105-YARN-2928.04.patch

> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has a potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": ​407539712,
> "1463518170347": ​407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5176) More test cases for queuing of containers at the NM

2016-05-27 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-5176:


 Summary: More test cases for queuing of containers at the NM
 Key: YARN-5176
 URL: https://issues.apache.org/jira/browse/YARN-5176
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Konstantinos Karanasos
Assignee: Konstantinos Karanasos


Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5098) Yarn Application log aggregation fails because NM cannot get correct HDFS delegation token

2016-05-27 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5098:
--
Attachment: YARN-5098.1.patch

> Yarn Application log aggregation fails because NM cannot get correct HDFS 
> delegation token
> ---
>
> Key: YARN-5098
> URL: https://issues.apache.org/jira/browse/YARN-5098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-5098.1.patch, YARN-5098.1.patch
>
>
> Environment : HA cluster
> Yarn application logs for long running application could not be gathered 
> because Nodemanager failed to talk to HDFS with below error.
> {code}
> 2016-05-16 18:18:28,533 INFO  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:finishLogAggregation(555)) - Application just 
> finished : application_1463170334122_0002
> 2016-05-16 18:18:28,545 WARN  ipc.Client (Client.java:run(705)) - Exception 
> encountered while connecting to the server :
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 171 for hrt_qa) can't be found in cache
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:583)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:398)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:752)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:748)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:747)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:398)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1597)
> at org.apache.hadoop.ipc.Client.call(Client.java:1439)
> at org.apache.hadoop.ipc.Client.call(Client.java:1386)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:240)
> at com.sun.proxy.$Proxy83.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
> at com.sun.proxy.$Proxy84.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:1018)
> at org.apache.hadoop.fs.Hdfs.getServerDefaults(Hdfs.java:156)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:550)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:687)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5098) Yarn Application log aggregation fails because NM cannot get correct HDFS delegation token

2016-05-27 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304889#comment-15304889
 ] 

Jian He commented on YARN-5098:
---

bq. Overall, I think we can simplify this code ...
Created YARN-5175 to track this.
bq. Why this change?
The {{dttr}} already prints {{dttr.referringAppIds}}, so the log line was duplicated.

Updated the patch.

> Yarn Application log aggregation fails because NM cannot get correct HDFS 
> delegation token
> ---
>
> Key: YARN-5098
> URL: https://issues.apache.org/jira/browse/YARN-5098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-5098.1.patch
>
>
> Environment : HA cluster
> Yarn application logs for long running application could not be gathered 
> because Nodemanager failed to talk to HDFS with below error.
> {code}
> 2016-05-16 18:18:28,533 INFO  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:finishLogAggregation(555)) - Application just 
> finished : application_1463170334122_0002
> 2016-05-16 18:18:28,545 WARN  ipc.Client (Client.java:run(705)) - Exception 
> encountered while connecting to the server :
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 171 for hrt_qa) can't be found in cache
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:583)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:398)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:752)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:748)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:747)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:398)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1597)
> at org.apache.hadoop.ipc.Client.call(Client.java:1439)
> at org.apache.hadoop.ipc.Client.call(Client.java:1386)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:240)
> at com.sun.proxy.$Proxy83.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
> at com.sun.proxy.$Proxy84.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:1018)
> at org.apache.hadoop.fs.Hdfs.getServerDefaults(Hdfs.java:156)
> at 
> org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:550)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:687)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5105) entire time series is returned for YARN container system metrics (CPU and memory)

2016-05-27 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5105:
---
Attachment: YARN-5105-YARN-2928.04.patch

> entire time series is returned for YARN container system metrics (CPU and 
> memory)
> -
>
> Key: YARN-5105
> URL: https://issues.apache.org/jira/browse/YARN-5105
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5105-YARN-2928.01.patch, 
> YARN-5105-YARN-2928.02.patch, YARN-5105-YARN-2928.03.patch, 
> YARN-5105-YARN-2928.04.patch
>
>
> I see that the entire time series of the CPU and memory metrics are returned 
> for the YARN containers REST query. This has a potential of bloating the 
> output big time.
> {noformat}
> "metrics": [
> {
> "type": "TIME_SERIES",
> "id": "MEMORY",
> "values": 
> {
> "1463518173363": ​407539712,
> "1463518170347": ​407539712,
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing

2016-05-27 Thread Hadoop QA (JIRA)
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} ... 
{color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt |
|   | hadoop.yarn.server.resourcemanager.scheduler.TestAppSchedulingInfo |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806717/YARN-4837-20160527.txt
 |
| JIRA Issue | YARN-4837 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 6940f217cc0e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5ea6fd8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11738/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/11738/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11738/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11738/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11738/testReport/ |
| modules | C:  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
  U: hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11738/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> User facing aspects of 'AM blacklisting' feature need fixing
> 
>
> Key: YARN-4837
> URL: https://issues.apache.org/jira/browse/YARN-4837
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: YARN-4837-20160515.txt, YARN-4837-20160520.1.txt, 
> YARN-4837-20160520.txt, YARN-4837-20160527.txt
>
>
> Was reviewing the user-facing aspects that we are releasing as part of 2.8.0.
> Looking at the 'AM blacklisting feature', I see several things to be fixed 
> before we release it in 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5165) NoOvercommitPolicy does not take advantage of RLE representation of plan

2016-05-27 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304854#comment-15304854
 ] 

Carlo Curino commented on YARN-5165:


The attached patch leverages {{RLESparseResourceAllocation.merge()}} to 
subtract the proposed reservation skyline from the plan's available resources, 
and then tests that the result is non-negative. If an exception occurs, we 
repackage it as a more meaningful exception type. This requires less code and 
is substantially more efficient, as it leverages the RLE-encoded 
implementation of merge instead of the step-by-step implementation we used to 
have (see the sketch below).
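
To make the shape of the check concrete, here is a self-contained sketch with 
plain longs standing in for {{Resource}} objects; it is illustrative only, not 
the patch, and a real implementation would throw a planning exception rather 
than {{IllegalStateException}}:
{code}
import java.util.TreeMap;

/** Sketch: subtract an RLE reservation "skyline" from RLE available
 *  resources and fail fast if any interval would go negative. */
public class RleSubtractSketch {

  /** RLE map: key = interval start time, value = level until the next key. */
  static TreeMap<Long, Long> subtractAndCheck(TreeMap<Long, Long> available,
      TreeMap<Long, Long> skyline) {
    TreeMap<Long, Long> result = new TreeMap<>();
    // Walk the union of breakpoints: the cost is proportional to the number
    // of RLE segments, not to the number of time steps.
    TreeMap<Long, Long> points = new TreeMap<>(available);
    points.putAll(skyline);
    for (Long t : points.keySet()) {
      long net = floorValue(available, t) - floorValue(skyline, t);
      if (net < 0) {
        throw new IllegalStateException("Overcommit at time " + t);
      }
      result.put(t, net);
    }
    return result;
  }

  private static long floorValue(TreeMap<Long, Long> rle, long t) {
    return rle.floorEntry(t) == null ? 0 : rle.floorEntry(t).getValue();
  }

  public static void main(String[] args) {
    TreeMap<Long, Long> available = new TreeMap<>();
    available.put(0L, 100L);      // 100 units available from t=0
    TreeMap<Long, Long> skyline = new TreeMap<>();
    skyline.put(10L, 40L);        // reservation asks for 40 units from t=10
    skyline.put(20L, 0L);         // ...until t=20
    System.out.println(subtractAndCheck(available, skyline));
  }
}
{code}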

> NoOvercommitPolicy does not take advantage of RLE representation of plan
> 
>
> Key: YARN-5165
> URL: https://issues.apache.org/jira/browse/YARN-5165
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5165.1.patch, YARN-5165.2.patch
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-27 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304849#comment-15304849
 ] 

Carlo Curino commented on YARN-5164:


Thinking more about this, it is non-trivial to RLE-ify this policy. The policy 
computes, for every sliding window, the integral of used resources and makes 
sure it does not exceed an integral quota. The issue with this is that even 
for a constant value of the user's reservation (or sum of reservations), the 
integral computed over a moving window changes.
In the attached *YARN-5164-example.pdf* we visualize this for a simple 5-sample 
window. The orange line changes even for constant values of the blue line (see 
the sketch below).

We could in principle leverage higher-order representations (where we store 
RLE-encoded vectors for non-horizontal lines as starting point + slope), but 
this is far from trivial, and we need to make sure the code complexity is 
justified. Postponing this for now.
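
A tiny numeric sketch of that effect (illustrative values only): even while 
usage sits at a constant plateau, the sliding-window integral keeps changing 
as the window ramps in and out, so it is not piecewise-constant over the same 
breakpoints as the RLE input.
{code}
public class SlidingWindowIntegral {
  public static void main(String[] args) {
    int window = 5;                                       // 5-sample window
    long[] usage = {0, 0, 10, 10, 10, 10, 10, 10, 0, 0};  // constant plateau
    for (int t = 0; t < usage.length; t++) {
      long sum = 0;
      for (int k = Math.max(0, t - window + 1); k <= t; k++) {
        sum += usage[k];
      }
      // For t=2..7 usage is a constant 10, yet the integral ramps 10 -> 50.
      System.out.printf("t=%d usage=%d windowIntegral=%d%n", t, usage[t], sum);
    }
  }
}
{code}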

> CapacityOvertimePolicy does not take advantaged of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5164) CapacityOvertimePolicy does not take advantaged of plan RLE

2016-05-27 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5164:
---
Attachment: YARN-5164-example.pdf

> CapacityOvertimePolicy does not take advantaged of plan RLE
> ---
>
> Key: YARN-5164
> URL: https://issues.apache.org/jira/browse/YARN-5164
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5164-example.pdf
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5169) most YARN events have timestamp of -1

2016-05-27 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304844#comment-15304844
 ] 

Li Lu commented on YARN-5169:
-

Thanks [~sjlee0]! 

bq.  Are event timestamps for consumption by the timeline service exclusively? 
Or should we expect them to have reasonable timestamps more generally?
Right now the timeline service may be the only user of this data, but providing 
this information would certainly be helpful to other components. 

bq. And also is the rationale of not generating the timestamp by default (being 
too expensive) still a valid conclusion?
This is also my concern. For NMs, my hunch is that we generate far fewer 
events, so the performance problem should be less severe? 
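
For reference, one way to populate a reasonable timestamp is simply to use the 
two-argument {{AbstractEvent}} constructor; a minimal sketch with a 
hypothetical event type:
{code}
import org.apache.hadoop.yarn.event.AbstractEvent;

public class MyEvent extends AbstractEvent<MyEvent.Type> {
  public enum Type { STARTED, FINISHED }

  public MyEvent(Type type) {
    // Pass a real timestamp instead of using the one-arg constructor,
    // which leaves getTimestamp() at -1.
    super(type, System.currentTimeMillis());
  }
}
{code}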

> most YARN events have timestamp of -1
> -
>
> Key: YARN-5169
> URL: https://issues.apache.org/jira/browse/YARN-5169
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Sangjin Lee
>
> Most of the YARN events (subclasses of {{AbstractEvent}}) have a timestamp of 
> -1. {{AbstractEvent}} has two constructors, one that initializes the 
> timestamp to -1 and the other to a caller-provided value. But most events 
> use the former (thus a timestamp of -1).
> Some of the more common events, including {{ApplicationEvent}}, 
> {{ContainerEvent}}, {{JobEvent}}, etc. do not set the timestamp.
> The rationale for this behavior seems to be mentioned in {{AbstractEvent}}:
> {code}
>   // use this if you DON'T care about the timestamp
>   public AbstractEvent(TYPE type) {
> this.type = type;
> // We're not generating a real timestamp here.  It's too expensive.
> timestamp = -1L;
>   }
> {code}
> This absence of the timestamp isn't really visible in many cases and 
> therefore may have gone unnoticed, but the timeline service exposes this 
> problem very visibly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5175) Simplify the delegation token renewal management for YARN localization and log-aggregation

2016-05-27 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5175:
--
Description: 
Increasingly, the DelegationTokenRenewer class for renewing expiring tokens for 
localization and log-aggregation is getting complicated. This could have been 
done at a per-user level. Copying comments from Vinod in YARN-5098:
bq. Overall, I think we can simplify this code if we simply always manage our 
own tokens for localization and log-aggregation for long-running applications / 
services. Today, it's too complicated: for the first day, we use the user's 
token T, second day we get a new token T' but share it for all the apps 
originally sharing T, after RM restart we use a new token T'' which is 
different for each of the apps originally sharing T. We can simplify this by 
always managing it ourselves and managing them per-user!

  was:Increasingly, this DelegationTokenRenewer class for renewing expiring 
token for localization and log-aggregation is getting complicated. We could 
have done it at a per-user level.  copying comments from vinod in YARN-5098
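
As a rough illustration of the per-user idea (all names are hypothetical and 
this is not from any patch), the renewer would key managed tokens by user 
rather than by application:
{code}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Hypothetical sketch: one managed token per user, shared by all of that
 *  user's apps, instead of per-app token juggling across renewals/restarts. */
public class PerUserTokenManager<TOKEN> {
  private final Map<String, TOKEN> tokenPerUser = new ConcurrentHashMap<>();
  private final Map<String, Set<String>> appsPerUser = new ConcurrentHashMap<>();

  public synchronized TOKEN registerApp(String user, String appId,
      Function<String, TOKEN> obtainToken) {
    appsPerUser.computeIfAbsent(user, u -> new HashSet<>()).add(appId);
    // The first app for a user obtains the token; later apps share it.
    return tokenPerUser.computeIfAbsent(user, obtainToken);
  }

  public synchronized void appFinished(String user, String appId) {
    Set<String> apps = appsPerUser.get(user);
    if (apps != null && apps.remove(appId) && apps.isEmpty()) {
      appsPerUser.remove(user);
      tokenPerUser.remove(user);  // last app for the user: drop/cancel token
    }
  }
}
{code}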


> Simplify the delegation token renewal management for YARN localization and 
> log-aggregation
> --
>
> Key: YARN-5175
> URL: https://issues.apache.org/jira/browse/YARN-5175
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>
> Increasingly, the DelegationTokenRenewer class for renewing expiring tokens 
> for localization and log-aggregation is getting complicated. This could have 
> been done at a per-user level. Copying comments from Vinod in YARN-5098:
> bq. Overall, I think we can simplify this code if we simply always manage our 
> own tokens for localization and log-aggregation for long-running applications 
> / services. Today, it's too complicated: for the first day, we use the user's 
> token T, second day we get a new token T' but share it for all the apps 
> originally sharing T, after RM restart we use a new token T'' which is 
> different for each of the apps originally sharing T. We can simplify this by 
> always managing it ourselves and managing them per-user!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5175) Simplify the delegation token renewal management for YARN localization and log-aggregation

2016-05-27 Thread Jian He (JIRA)
Jian He created YARN-5175:
-

 Summary: Simplify the delegation token renewal management for YARN 
localization and log-aggregation
 Key: YARN-5175
 URL: https://issues.apache.org/jira/browse/YARN-5175
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He


Increasingly, the DelegationTokenRenewer class for renewing expiring tokens for 
localization and log-aggregation is getting complicated. This could have been 
done at a per-user level. Copying comments from Vinod in YARN-5098.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5127) Expose ExecutionType in Container api record

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304792#comment-15304792
 ] 

Hudson commented on YARN-5127:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9879 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9879/])
YARN-5127. Expose ExecutionType in Container api record. (Hitesh Sharma via 
arun suresh: rev aa975bc7811fc7c52b814ad9635bff8c2d34655b)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Container.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestDistributedSchedulingService.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/scheduler/OpportunisticContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerPBImpl.java


> Expose ExecutionType in Container api record
> 
>
> Key: YARN-5127
> URL: https://issues.apache.org/jira/browse/YARN-5127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Fix For: 2.9.0
>
> Attachments: YARN-5127.002.patch, YARN-5127.003.patch, 
> YARN-5127.004.patch, YARN-5127.005.patch, YARN-5127.v1.patch
>
>
> Currently the ExecutionType of the Container returned as a response to the 
> allocate call is contained in the {{ContainerTokenIdentifier}}, which is 
> encoded into the ContainerToken.
> Unfortunately, the client would need to decode the returned token to access 
> the ContainerTokenIdentifier, which probably should not be allowed.
> This JIRA proposes to add a {{getExecutionType()}} method in the container 
> record.
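
For context, a client-side sketch of how the new accessor would be used after 
an allocate call (the surrounding loop is illustrative, not from the patch):
{code}
import java.util.List;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ExecutionType;

public class ExecutionTypeCheck {
  static void handleAllocation(AllocateResponse response) {
    List<Container> allocated = response.getAllocatedContainers();
    for (Container c : allocated) {
      // With this change the client reads the type directly, instead of
      // decoding the ContainerTokenIdentifier out of the container token.
      if (c.getExecutionType() == ExecutionType.OPPORTUNISTIC) {
        System.out.println(c.getId() + " is opportunistic");
      }
    }
  }
}
{code}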



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5165) NoOvercommitPolicy does not take advantage of RLE representation of plan

2016-05-27 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5165:
---
Attachment: YARN-5165.2.patch

> NoOvercommitPolicy does not take advantage of RLE representation of plan
> 
>
> Key: YARN-5165
> URL: https://issues.apache.org/jira/browse/YARN-5165
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5165.1.patch, YARN-5165.2.patch
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5165) NoOvercommitPolicy does not take advantage of RLE representation of plan

2016-05-27 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5165:
---
Attachment: YARN-5165.1.patch

> NoOvercommitPolicy does not take advantage of RLE representation of plan
> 
>
> Key: YARN-5165
> URL: https://issues.apache.org/jira/browse/YARN-5165
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-5165.1.patch
>
>
> As a consequence small time granularities (e.g., 1 sec) and long time horizon 
> for a reservation (e.g., months) run rather slow (10 sec). 
> Proposed resolution is to switch to interval math in checking, similar to how 
> YARN-4359 does for agents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5127) Expose ExecutionType in Container api record

2016-05-27 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304768#comment-15304768
 ] 

Arun Suresh commented on YARN-5127:
---

+1, thanks for the patch [~hrsharma].
Will fix the remaining checkstyle warnings when I commit shortly.

> Expose ExecutionType in Container api record
> 
>
> Key: YARN-5127
> URL: https://issues.apache.org/jira/browse/YARN-5127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN-5127.002.patch, YARN-5127.003.patch, 
> YARN-5127.004.patch, YARN-5127.005.patch, YARN-5127.v1.patch
>
>
> Currently the ExecutionType of the Container returned as a response to the 
> allocate call is contained in the {{ContainerTokenIdentifier}}, which is 
> encoded into the ContainerToken.
> Unfortunately, the client would need to decode the returned token to access 
> the ContainerTokenIdentifier, which probably should not be allowed.
> This JIRA proposes to add a {{getExecutionType()}} method in the container 
> record.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5127) Expose ExecutionType in Container api record

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304716#comment-15304716
 ] 

Hadoop QA commented on YARN-5127:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 53s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 32s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 7 new + 
40 unchanged - 11 fixed = 47 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
21s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 7s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
5 new + 5423 unchanged - 0 fixed = 5428 total (was 5423) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 10s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 44s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 21s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |

[jira] [Updated] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing

2016-05-27 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-4837:
--
Attachment: YARN-4837-20160527.txt

Updated patch fixing the conflicts.

Can't fix the checkstyle warning (method length is 174 lines), and the 
test failures are unrelated.

> User facing aspects of 'AM blacklisting' feature need fixing
> 
>
> Key: YARN-4837
> URL: https://issues.apache.org/jira/browse/YARN-4837
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: YARN-4837-20160515.txt, YARN-4837-20160520.1.txt, 
> YARN-4837-20160520.txt, YARN-4837-20160527.txt
>
>
> Was reviewing the user-facing aspects that we are releasing as part of 2.8.0.
> Looking at the 'AM blacklisting feature', I see several things to be fixed 
> before we release it in 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5127) Expose ExecutionType in Container api record

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304693#comment-15304693
 ] 

Hadoop QA commented on YARN-5127:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 7 new + 
39 unchanged - 11 fixed = 46 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
20s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 3s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 
5 new + 5423 unchanged - 0 fixed = 5428 total (was 5423) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 4s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 7s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 59s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.security.TestClientToAMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |

[jira] [Comment Edited] (YARN-5169) most YARN events have timestamp of -1

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304689#comment-15304689
 ] 

Sangjin Lee edited comment on YARN-5169 at 5/27/16 8:11 PM:


Thanks for your comments [~gtCarrera9]! I agree that using 
{{SystemMetricEvent}} and others to populate the timestamp is one approach to 
this. I have a bit broader question for the event timestamps, however. Are 
event timestamps for consumption by the timeline service exclusively? Or should 
we expect them to have reasonable timestamps more generally?

And also is the rationale of not generating the timestamp by default (being too 
expensive) still a valid conclusion?


was (Author: sjlee0):
Thanks for your comments [~gtCarrera9]! I agree that using 
{{SystemMetricEvent}} and others to populate the timestamp is one approach to 
this. I have a bit broader question for the event timestamps, however. Are 
event timestamps for consumption by the timeline service exclusively? Or should 
we expect them to have reasonable timestamps more generally?

> most YARN events have timestamp of -1
> -
>
> Key: YARN-5169
> URL: https://issues.apache.org/jira/browse/YARN-5169
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Sangjin Lee
>
> Most of the YARN events (subclasses of {{AbstractEvent}}) have a timestamp of 
> -1. {{AbstractEvent}} has two constructors, one that initializes the 
> timestamp to -1 and the other to a caller-provided value. But most events 
> use the former (thus a timestamp of -1).
> Some of the more common events, including {{ApplicationEvent}}, 
> {{ContainerEvent}}, {{JobEvent}}, etc. do not set the timestamp.
> The rationale for this behavior seems to be mentioned in {{AbstractEvent}}:
> {code}
>   // use this if you DON'T care about the timestamp
>   public AbstractEvent(TYPE type) {
> this.type = type;
> // We're not generating a real timestamp here.  It's too expensive.
> timestamp = -1L;
>   }
> {code}
> This absence of the timestamp isn't really visible in many cases and 
> therefore may have gone unnoticed, but the timeline service exposes this 
> problem very visibly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5169) most YARN events have timestamp of -1

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304689#comment-15304689
 ] 

Sangjin Lee commented on YARN-5169:
---

Thanks for your comments [~gtCarrera9]! I agree that using 
{{SystemMetricEvent}} and others to populate the timestamp is one approach to 
this. I have a bit broader question for the event timestamps, however. Are 
event timestamps for consumption by the timeline service exclusively? Or should 
we expect them to have reasonable timestamps more generally?

> most YARN events have timestamp of -1
> -
>
> Key: YARN-5169
> URL: https://issues.apache.org/jira/browse/YARN-5169
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Sangjin Lee
>
> Most of the YARN events (subclasses of {{AbstractEvent}}) have a timestamp of 
> -1. {{AbstractEvent}} has two constructors, one that initializes the 
> timestamp to -1 and the other to a caller-provided value. But most events 
> use the former (thus a timestamp of -1).
> Some of the more common events, including {{ApplicationEvent}}, 
> {{ContainerEvent}}, {{JobEvent}}, etc. do not set the timestamp.
> The rationale for this behavior seems to be mentioned in {{AbstractEvent}}:
> {code}
>   // use this if you DON'T care about the timestamp
>   public AbstractEvent(TYPE type) {
> this.type = type;
> // We're not generating a real timestamp here.  It's too expensive.
> timestamp = -1L;
>   }
> {code}
> This absence of the timestamp isn't really visible in many cases and 
> therefore may have gone unnoticed, but the timeline service exposes this 
> problem very visibly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5098) Yarn Application log aggregation fails because NM cannot get correct HDFS delegation token

2016-05-27 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304685#comment-15304685
 ] 

Vinod Kumar Vavilapalli commented on YARN-5098:
---

Please fix the checkstyle and unit test issues.

The patch looks good overall; a few comments:
 - Overall, I think we can simplify this code if we simply always manage our 
own tokens for localization and log-aggregation for long-running applications / 
services. Today, it's too complicated: for the first day, we use the user's 
token T, second day we get a new token T' but share it for all the apps 
originally sharing T, after RM restart we use a new token T'' which is 
different for each of the apps originally sharing T. We can simplify this by 
always managing it ourselves and managing them per-user!
 - There are a few unused imports.
 - Unrelated to the patch, but let's rename requestNewHdfsDelegationToken() -> 
requestNewHdfsDelegationTokenAsProxyUser()
 - Why this change?
{code}
-LOG.info("Renewed delegation-token= [" + dttr + "], for "
-+ dttr.referringAppIds);
+LOG.info("Renewed delegation-token= [" + dttr + "]");
{code}
 - Testcase:
-- Should we use the same user for both tokens?
-- Add a comment saying that rm2 is simulating RM restart.
-- Can we rewrite the following? It is a little confusing:
{code}
if (dttr.token.equals(expectedToken)) {
  secondRenewInvoked = true;
  super.renewToken(dttr);
} else {
  firstRenewInvoked = true;
  throw new InvalidToken("Failed to renew");
}
{code}
to
{code}
if (dttr.token.equals(updatedtoken)) {
  super.renewToken(dttr);
} else if (dttr.token.equals(originalToken)) {
  throw new InvalidToken("Failed to renew");
} else {
  throw new IOException("Unexpected");
}
{code}
and assert that firstRenewInvoked and secondRenewInvoked are set?
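
For instance, the rewritten branches could set the two flags and the test 
could end with assertions along these lines (a sketch, JUnit 4 assumed):
{code}
// In renewToken(): updated token  -> secondRenewInvoked = true;
//                  original token -> firstRenewInvoked = true; then throw.
// At the end of the test:
Assert.assertTrue("original (invalid) token was never renewed", firstRenewInvoked);
Assert.assertTrue("updated token was never renewed", secondRenewInvoked);
{code}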

> Yarn Application log aggregation fails because NM cannot get correct HDFS 
> delegation token
> ---
>
> Key: YARN-5098
> URL: https://issues.apache.org/jira/browse/YARN-5098
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yesha Vora
>Assignee: Jian He
> Attachments: YARN-5098.1.patch
>
>
> Environment : HA cluster
> Yarn application logs for long running application could not be gathered 
> because Nodemanager failed to talk to HDFS with below error.
> {code}
> 2016-05-16 18:18:28,533 INFO  logaggregation.AppLogAggregatorImpl 
> (AppLogAggregatorImpl.java:finishLogAggregation(555)) - Application just 
> finished : application_1463170334122_0002
> 2016-05-16 18:18:28,545 WARN  ipc.Client (Client.java:run(705)) - Exception 
> encountered while connecting to the server :
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  token (HDFS_DELEGATION_TOKEN token 171 for hrt_qa) can't be found in cache
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:583)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:398)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:752)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:748)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:747)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3100(Client.java:398)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1597)
> at org.apache.hadoop.ipc.Client.call(Client.java:1439)
> at org.apache.hadoop.ipc.Client.call(Client.java:1386)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:240)
> at com.sun.proxy.$Proxy83.getServerDefaults(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:282)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)

[jira] [Commented] (YARN-4958) The file localization process should allow for wildcards to reduce the application footprint in the state store

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304675#comment-15304675
 ] 

Sangjin Lee commented on YARN-4958:
---

Sorry [~templedf], haven't had cycles to get back to this one. I'll definitely 
review the latest patch next week.

Just to get data from you, I assume that you tested for cases such as non-jar 
entries in libjars, etc. Could you kindly confirm?

> The file localization process should allow for wildcards to reduce the 
> application footprint in the state store
> ---
>
> Key: YARN-4958
> URL: https://issues.apache.org/jira/browse/YARN-4958
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-4958.001.patch, YARN-4958.002.patch, 
> YARN-4958.003.patch
>
>
> When using the -libjars option to add classes to the classpath, every library 
> so added is explicitly listed in the {{ContainerLaunchContext}}'s local 
> resources even though they're all uploaded to the same directory in HDFS.  
> When using tools like Crunch without an uber JAR or when trying to take 
> advantage of the shared cache, the number of libraries can be quite large.  
> We've seen many cases where we had to turn down the max number of 
> applications to prevent ZK from running out of heap because of the size of 
> the state store entries.
> Rather than listing all files independently, this JIRA proposes to have the 
> NM allow wildcards in the resource localization paths.  Specifically, we 
> propose to allow a path to have a final component (name) set to "*", which is 
> interpreted by the NM as "download the full directory and link to every file 
> in it from the job's working directory."  This behavior is the same as the 
> current behavior when using -libjars, but avoids explicitly listing every 
> file.
> This JIRA does not attempt to provide more general purpose wildcards, such as 
> "\*.jar" or "file\*", as having multiple entries for a single directory 
> presents numerous logistical issues.
> This JIRA also does not attempt to integrate with the shared cache.  That 
> work will be left to a future JIRA.  Specifically, this JIRA only applies 
> when a full directory is uploaded.  Currently the shared cache does not 
> handle directory uploads.
> This JIRA proposes to allow for wildcards both in the internal processing of 
> the -libjars switch and in paths added through the {{Job}} and 
> {{DistributedCache}} classes.
> The proposed approach is to treat a path, "dir/\*", as "dir" for purposes of 
> all file verification and localization.  In the final step, the NM will query 
> the localized directory to get a list of the files in "dir" such that each 
> can be linked from the job's working directory.  Since $PWD/\* is always 
> included on the classpath, all JAR files in "dir" will be in the classpath.
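
As a usage sketch of the proposal (paths are hypothetical; the trailing-"*" 
handling is what this JIRA would add):
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class WildcardLocalizationExample {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "wildcard-demo");
    // With the proposed behavior, a trailing "*" tells the NM to localize
    // the whole directory and link every file in it from the working
    // directory, yielding one state-store entry instead of one per jar.
    job.addCacheFile(new URI("hdfs:///user/me/libjars/*"));
    // ... set mapper/reducer, submit, etc.
  }
}
{code}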



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5109) timestamps are stored unencoded causing parse errors

2016-05-27 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304659#comment-15304659
 ] 

Varun Saxena commented on YARN-5109:


Thanks [~sjlee0] and [~jrottinghuis] for the review and commit. And for 
suggesting the approach taken in the JIRA.

> timestamps are stored unencoded causing parse errors
> 
>
> Key: YARN-5109
> URL: https://issues.apache.org/jira/browse/YARN-5109
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: yarn-2928-1st-milestone
> Fix For: YARN-2928
>
> Attachments: YARN-5109-YARN-2928.003.patch, 
> YARN-5109-YARN-2928.01.patch, YARN-5109-YARN-2928.02.patch, 
> YARN-5109-YARN-2928.03.patch, YARN-5109-YARN-2928.04.patch, 
> YARN-5109-YARN-2928.05.patch, YARN-5109-YARN-2928.06.patch, 
> YARN-5109-YARN-2928.07.patch, YARN-5109-YARN-2928.08.patch
>
>
> When we store timestamps (for example as part of the row key or part of the 
> column name for an event), the bytes are used as is without any encoding. If 
> the byte value happens to contain a separator character we use (e.g. "!" or 
> "="), it causes a parse failure when we read it.
> I came across this while looking into this error in the timeline reader:
> {noformat}
> 2016-05-17 21:28:38,643 WARN 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.TimelineStorageUtils:
>  incorrectly formatted column name: it will be discarded
> {noformat}
> I traced the data that was causing this, and the column name (for the event) 
> was the following:
> {noformat}
> i:e!YARN_RM_CONTAINER_CREATED=\x7F\xFF\xFE\xABDY=\x99=YARN_CONTAINER_ALLOCATED_HOST
> {noformat}
> Note that the column name is supposed to be of the format (event 
> id)=(timestamp)=(event info key). However, observe the timestamp portion:
> {noformat}
> \x7F\xFF\xFE\xABDY=\x99
> {noformat}
> The presence of the separator ("=") causes the parse error.
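
A small self-contained sketch of the failure mode (illustrative values): 
scanning nearby timestamps quickly finds one whose raw inverted-long bytes 
contain the separator byte 0x3D ('='), which is exactly what corrupts the 
(event id)=(timestamp)=(event info key) column name above.
{code}
import java.nio.ByteBuffer;
import java.util.Arrays;

public class TimestampSeparatorDemo {
  public static void main(String[] args) {
    long base = 1463518173363L;  // an epoch-millis value like those above
    for (long ts = base; ts < base + 256; ts++) {
      // Inverted form, matching the \x7F\xFF\xFE... bytes in the example.
      byte[] raw = ByteBuffer.allocate(8).putLong(Long.MAX_VALUE - ts).array();
      for (byte b : raw) {
        if (b == (byte) '=') {
          System.out.println("ts=" + ts + " encodes to "
              + Arrays.toString(raw) + " -> contains separator 0x3D");
          return;
        }
      }
    }
  }
}
{code}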



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-05-27 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304653#comment-15304653
 ] 

Joep Rottinghuis commented on YARN-5170:


General concern with singletons is that they start out innocent w/o state. Then 
state creeps in and subtle bugs happen.

Main motivation around using singletons for the converters was
a) LongConverter was already a singleton, so we kept it consistent
b) Only one instance had to be around and could be re-used

a) will be tackled by refactoring both in this jira
b) Aside from single object creation being cheap (unless we do it so often that 
we can show it isn't), it turns out that refactoring the static access ends up 
with cleaner code in this case. We had cases where we asked a static method to 
encode, which would then pull a singleton reference to a converter and create 
an instance of the key to pass to it. Some of our other code evolved into 
similar patterns.
It turns out that we now keep the converters in instance variables and/or 
create them once per call and still re-use them.

We actually re-created rowkeys several times in the call hierarchy of 
HBaseTimelineWriterImpl. I'm in the middle of addressing that now.
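To illustrate the direction, a minimal before/after sketch (all names here are 
simplified stand-ins, not the actual YARN-5170 classes):
{code}
// Before: call sites reach a shared singleton through static access, e.g.
//   byte[] b = AppIdKeyConverter.getInstance().encode(key);

// After: the converter is a plain instance variable, created once and re-used.
interface KeyConverter<T> {
  byte[] encode(T key);
}

class StringKeyConverter implements KeyConverter<String> {
  @Override
  public byte[] encode(String key) {
    return key.getBytes(java.nio.charset.StandardCharsets.UTF_8);
  }
}

class ApplicationRowKeySketch {
  // no hidden shared state; trivial to swap or extend per instance
  private final KeyConverter<String> converter = new StringKeyConverter();

  byte[] getRowKeyBytes(String appId) {
    return converter.encode(appId);
  }
}
{code}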

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
> Attachments: YARN-5170-YARN-2928.01.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simply 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate rowkeys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out to keep the 
> "Utils" class as small as possible, reserving it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5167) Escaping occurrences of encodedValues

2016-05-27 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304596#comment-15304596
 ] 

Varun Saxena edited comment on YARN-5167 at 5/27/16 7:16 PM:
-

Will we target this for the 1st milestone / the drop on trunk? Personally I 
think we can.


was (Author: varun_saxena):
Will we target this for the 1st milestone? Personally I think we can.

> Escaping occurrences of encodedValues
> 
>
> Key: YARN-5167
> URL: https://issues.apache.org/jira/browse/YARN-5167
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Sangjin Lee
>Priority: Critical
>
> We had earlier decided to punt on this, but in discussing YARN-5109 we 
> thought it would be best to just be safe rather than sorry later on.
> Encoded sequences can occur in the original string, especially in the case of 
> a "foreign key" if we decide to have lookups.
> For example, space is encoded as %2$.
> Encoding "String with %2$ in it" would decode to "String with   in it".
> We thought we should first escape existing occurrences of encoded strings by 
> prefixing a backslash (even if there is already a backslash, that should be 
> ok). Then we should replace all unencoded strings.
> On the way out, we should replace all occurrences of our encoded string with 
> the original, except when prefixed by an escape character. Lastly, we should 
> strip off the one additional backslash in front of each remaining (escaped) 
> sequence.
> Adding the following entry to TestSeparator#testEncodeDecode() demonstrates 
> what this jira should accomplish:
> {code}
> testEncodeDecode("Double-escape %2$ and %3$ or \\%2$ or \\%3$, nor  
> %2$ = no problem!", Separator.QUALIFIERS,
> Separator.VALUES, Separator.SPACE, Separator.TAB);
> {code}
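A simplified sketch of the escape-then-encode idea, handling a single separator 
(space <-> %2$). The class and method names are illustrative, not the actual 
Separator implementation, and multi-separator handling is omitted:
{code}
public final class EscapeSketch {
  private static final String ENCODED_SPACE = "%2$";

  static String encode(String s) {
    // 1. escape pre-existing occurrences of the encoded sequence
    String escaped = s.replace(ENCODED_SPACE, "\\" + ENCODED_SPACE);
    // 2. encode the real separator
    return escaped.replace(" ", ENCODED_SPACE);
  }

  static String decode(String s) {
    // 1. decode only occurrences not prefixed by the escape character
    String decoded = s.replaceAll("(?<!\\\\)%2\\$", " ");
    // 2. strip the one extra backslash in front of each escaped sequence
    return decoded.replace("\\" + ENCODED_SPACE, ENCODED_SPACE);
  }

  public static void main(String[] args) {
    String original = "String with %2$ in it";
    // round-trips without corrupting the literal %2$
    System.out.println(decode(encode(original)).equals(original)); // true
  }
}
{code}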



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5167) Escaping occurrences of encodedValues

2016-05-27 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304596#comment-15304596
 ] 

Varun Saxena commented on YARN-5167:


Will we target this for the 1st milestone? Personally I think we can.

> Escaping occurrences of encodedValues
> 
>
> Key: YARN-5167
> URL: https://issues.apache.org/jira/browse/YARN-5167
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Sangjin Lee
>Priority: Critical
>
> We had earlier decided to punt on this, but in discussing YARN-5109 we 
> thought it would be best to just be safe rather than sorry later on.
> Encoded sequences can occur in the original string, especially in the case of 
> a "foreign key" if we decide to have lookups.
> For example, space is encoded as %2$.
> Encoding "String with %2$ in it" would decode to "String with   in it".
> We thought we should first escape existing occurrences of encoded strings by 
> prefixing a backslash (even if there is already a backslash, that should be 
> ok). Then we should replace all unencoded strings.
> On the way out, we should replace all occurrences of our encoded string with 
> the original, except when prefixed by an escape character. Lastly, we should 
> strip off the one additional backslash in front of each remaining (escaped) 
> sequence.
> Adding the following entry to TestSeparator#testEncodeDecode() demonstrates 
> what this jira should accomplish:
> {code}
> testEncodeDecode("Double-escape %2$ and %3$ or \\%2$ or \\%3$, nor  
> %2$ = no problem!", Separator.QUALIFIERS,
> Separator.VALUES, Separator.SPACE, Separator.TAB);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5170) Eliminate singleton converters and static method access

2016-05-27 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304588#comment-15304588
 ] 

Varun Saxena edited comment on YARN-5170 at 5/27/16 7:10 PM:
-

Removal of static methods is a good idea, especially the methods reading 
events, metrics, etc., which just did not belong in TimelineStorageUtils.

Regarding replacing the singleton converters with instance variables, I wanted 
to understand the intention.
Is it because, in general, singletons make it hard to make changes in the 
future and are not extensible? Or are there some specific concerns?


was (Author: varun_saxena):
Removal of static methods is a good idea, especially the methods reading 
events, metrics, etc., which just did not belong in TimelineStorageUtils.

Regarding replacing the singleton converters with instance variables, I wanted 
to understand the intention.
Is it because in general singletons make it hard to make changes in the future 
and are not extensible? Or are there some specific concerns?

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
> Attachments: YARN-5170-YARN-2928.01.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simply 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate rowkeys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out to keep the 
> "Utils" class as small as possible, reserving it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-05-27 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304588#comment-15304588
 ] 

Varun Saxena commented on YARN-5170:


Removal of static methods is a good idea, especially the methods reading 
events, metrics, etc., which just did not belong in TimelineStorageUtils.

Regarding replacing the singleton converters with instance variables, I wanted 
to understand the intention.
Is it because in general singletons make it hard to make changes in the future 
and are not extensible? Or are there some specific concerns?

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
> Attachments: YARN-5170-YARN-2928.01.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simply 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate rowkeys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out to keep the 
> "Utils" class as small as possible, reserving it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5127) Expose ExecutionType in Container api record

2016-05-27 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh resolved YARN-5127.
---
Resolution: Fixed

> Expose ExecutionType in Container api record
> 
>
> Key: YARN-5127
> URL: https://issues.apache.org/jira/browse/YARN-5127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN-5127.002.patch, YARN-5127.003.patch, 
> YARN-5127.004.patch, YARN-5127.005.patch, YARN-5127.v1.patch
>
>
> Currently the ExecutionType of the Container returned as a response to the 
> allocate call is contained in the {{ContainerTokenIdentifier}}, which is 
> encoded into the ContainerToken.
> Unfortunately, the client would need to decode the returned token to access 
> the ContainerTokenIdentifier, which probably should not be allowed.
> This JIRA proposes to add a {{getExecutionType()}} method in the container 
> record.
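A sketch of what the proposed accessor could look like on the {{Container}} 
record (heavily abridged; the real abstract class has many more members):
{code}
import org.apache.hadoop.yarn.api.records.ExecutionType;

public abstract class Container {
  // existing accessors (getId, getNodeId, getResource, ...) elided

  /**
   * The ExecutionType of the container, exposed directly so clients do not
   * have to decode the ContainerToken to learn it.
   */
  public abstract ExecutionType getExecutionType();
}
{code}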



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5169) most YARN events have timestamp of -1

2016-05-27 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304555#comment-15304555
 ] 

Li Lu edited comment on YARN-5169 at 5/27/16 6:51 PM:
--

Thanks [~sjlee0] for raising this issue! I noticed that in RM's 
SystemMetricsPublisher code, we're creating {{SystemMetricEvent}} s which will 
always have timestamps. Those timestamps are typically generated by the 
call-sites of each system metrics publisher call. For example, in the 
{{appACLsUpdated}} call, the update time is generated in 
RMAppManager#createAndPopulateNewRMApp. Similarly, the appFinished method of 
SMP is called in RMAppImpl when we perform the transition to the final state, 
and we generate the finish time at that point. Maybe we'd like to provide a 
similar mechanism in the NM? That is to say, we could expose an NMMetricsEvent 
in the NM's source code and create those events when we publish the data 
(instead of using the timestamps of existing events). I think we may put part 
of this work in trunk, but one concern is that the newly introduced 
NMMetricsEvent would not be used in trunk but just in the YARN-2928 branch... 


was (Author: gtcarrera9):
Thanks [~sjlee0] for raising this issue! I noticed that in RM's 
SystemMetricsPublisher code, we're creating {{SystemMetricEvent}}s which will 
always have timestamps. Those timestamps are typically generated by the 
call-sites of each system metrics publisher call. For example, in the 
{{appACLsUpdated}} call, the update time is generated in 
RMAppManager#createAndPopulateNewRMApp. Similarly, the appFinished method of 
SMP is called in RMAppImpl when we perform the transition to the final state, 
and we generate the finish time at that point. Maybe we'd like to provide a 
similar mechanism in the NM? That is to say, we could expose an NMMetricsEvent 
in the NM's source code and create those events when we publish the data 
(instead of using the timestamps of existing events). I think we may put part 
of this work in trunk, but one concern is that the newly introduced 
NMMetricsEvent would not be used in trunk but just in the YARN-2928 branch... 

> most YARN events have timestamp of -1
> -
>
> Key: YARN-5169
> URL: https://issues.apache.org/jira/browse/YARN-5169
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Sangjin Lee
>
> Most of the YARN events (subclasses of {{AbstractEvent}}) have timestamp of 
> -1. {{AbstractEvent}} have two constructors, one that initializes the 
> timestamp to -1 and the other to the caller-provided value. But most events 
> use the former (thus timestamp of -1).
> Some of the more common events, including {{ApplicationEvent}}, 
> {{ContainerEvent}}, {{JobEvent}}, etc. do not set the timestamp.
> The rationale for this behavior seems to be mentioned in {{AbstractEvent}}:
> {code}
>   // use this if you DON'T care about the timestamp
>   public AbstractEvent(TYPE type) {
> this.type = type;
> // We're not generating a real timestamp here.  It's too expensive.
> timestamp = -1L;
>   }
> {code}
> This absence of the timestamp isn't really visible in many cases and 
> therefore may have gone unnoticed, but the timeline service exposes this 
> problem very visibly.
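For reference, {{AbstractEvent}} already has a second constructor that takes a 
caller-provided timestamp, so a sketch of a fix for any given event type could 
simply pass one through (the event class and enum below are hypothetical):
{code}
import org.apache.hadoop.yarn.event.AbstractEvent;

enum SketchEventType { CREATED, FINISHED }

public class SketchEvent extends AbstractEvent<SketchEventType> {
  public SketchEvent(SketchEventType type) {
    // use the timestamped constructor instead of the -1 default
    super(type, System.currentTimeMillis());
  }
}
{code}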



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5127) Expose ExecutionType in Container api record

2016-05-27 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reopened YARN-5127:
---

> Expose ExecutionType in Container api record
> 
>
> Key: YARN-5127
> URL: https://issues.apache.org/jira/browse/YARN-5127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN-5127.002.patch, YARN-5127.003.patch, 
> YARN-5127.004.patch, YARN-5127.005.patch, YARN-5127.v1.patch
>
>
> Currently the ExecutionType of the Container returned as a response to the 
> allocate call is contained in the {{ContainerTokenIdentifier}}, which is 
> encoded into the ContainerToken.
> Unfortunately, the client would need to decode the returned token to access 
> the ContainerTokenIdentifier, which probably should not be allowed.
> This JIRA proposes to add a {{getExecutionType()}} method in the container 
> record.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5169) most YARN events have timestamp of -1

2016-05-27 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304555#comment-15304555
 ] 

Li Lu commented on YARN-5169:
-

Thanks [~sjlee0] for raising this issue! I noticed that in RM's 
SystemMetricsPublisher code, we're creating {{SystemMetricEvent}}s which will 
always have timestamps. Those timestamps are typically generated by the 
call-sites of each system metrics publisher call. For example, in the 
{{appACLsUpdated}} call, the update time is generated in 
RMAppManager#createAndPopulateNewRMApp. Similarly, the appFinished method of 
SMP is called in RMAppImpl when we perform the transition to the final state, 
and we generate the finish time at that point. Maybe we'd like to provide a 
similar mechanism in the NM? That is to say, we could expose an NMMetricsEvent 
in the NM's source code and create those events when we publish the data 
(instead of using the timestamps of existing events). I think we may put part 
of this work in trunk, but one concern is that the newly introduced 
NMMetricsEvent would not be used in trunk but just in the YARN-2928 branch... 

> most YARN events have timestamp of -1
> -
>
> Key: YARN-5169
> URL: https://issues.apache.org/jira/browse/YARN-5169
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Sangjin Lee
>
> Most of the YARN events (subclasses of {{AbstractEvent}}) have timestamp of 
> -1. {{AbstractEvent}} have two constructors, one that initializes the 
> timestamp to -1 and the other to the caller-provided value. But most events 
> use the former (thus timestamp of -1).
> Some of the more common events, including {{ApplicationEvent}}, 
> {{ContainerEvent}}, {{JobEvent}}, etc. do not set the timestamp.
> The rationale for this behavior seems to be mentioned in {{AbstractEvent}}:
> {code}
>   // use this if you DON'T care about the timestamp
>   public AbstractEvent(TYPE type) {
> this.type = type;
> // We're not generating a real timestamp here.  It's too expensive.
> timestamp = -1L;
>   }
> {code}
> This absence of the timestamp isn't really visible in many cases and 
> therefore may have gone unnoticed, but the timeline service exposes this 
> problem very visibly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-27 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304548#comment-15304548
 ] 

Xuan Gong commented on YARN-5088:
-

Thanks for the review. Attached a new patch to address the latest comments.

> Improve "yarn log" command-line to read the last K bytes for the log files
> --
>
> Key: YARN-5088
> URL: https://issues.apache.org/jira/browse/YARN-5088
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5088.1.patch, YARN-5088.2.patch, YARN-5088.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5088) Improve "yarn log" command-line to read the last K bytes for the log files

2016-05-27 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5088:

Attachment: YARN-5088.3.patch

> Improve "yarn log" command-line to read the last K bytes for the log files
> --
>
> Key: YARN-5088
> URL: https://issues.apache.org/jira/browse/YARN-5088
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5088.1.patch, YARN-5088.2.patch, YARN-5088.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5174) add documentation on needing to add hbase-site.xml on YARN cluster

2016-05-27 Thread Sangjin Lee (JIRA)
Sangjin Lee created YARN-5174:
-

 Summary: add documentation on needing to add hbase-site.xml on 
YARN cluster
 Key: YARN-5174
 URL: https://issues.apache.org/jira/browse/YARN-5174
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Affects Versions: YARN-2928
Reporter: Sangjin Lee
Assignee: Sangjin Lee


One part that is missing in the documentation is the need to add 
{{hbase-site.xml}} on the client side (the client hadoop cluster). First, we 
need to arrive at the minimally required client settings to connect to the 
right hbase cluster. Then, we need to document them so that users know exactly 
what to do to configure the cluster to use the timeline service v.2.
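For example, a minimal client-side {{hbase-site.xml}} might contain little more 
than the ZooKeeper quorum of the hbase cluster; the exact required set of 
properties is what this JIRA needs to pin down (the quorum value below is a 
placeholder):
{code}
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
</configuration>
{code}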



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node

2016-05-27 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5171:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-4742

> Extend DistributedSchedulerProtocol to notify RM of containers allocated by 
> the Node
> 
>
> Key: YARN-5171
> URL: https://issues.apache.org/jira/browse/YARN-5171
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Inigo Goiri
>
> Currently, the RM does not know about Containers allocated by the 
> OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the 
> Distributed Scheduler request interceptor and the protocol to notify the RM 
> of new containers as and when they are allocated at the NM. The 
> {{RMContainer}} should also be extended to expose the {{ExecutionType}} of 
> the container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node

2016-05-27 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304421#comment-15304421
 ] 

Konstantinos Karanasos commented on YARN-5171:
--

Thanks for starting this [~elgoiri] and [~asuresh].
Should we change the distributed scheduler request interceptor, or shall we 
rather change the way we report containers in the NM-RM heartbeat (at the 
moment the running OPPORTUNISTIC containers are ignored)?

> Extend DistributedSchedulerProtocol to notify RM of containers allocated by 
> the Node
> 
>
> Key: YARN-5171
> URL: https://issues.apache.org/jira/browse/YARN-5171
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Inigo Goiri
>
> Currently, the RM does not know about Containers allocated by the 
> OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the 
> Distributed Scheduler request interceptor and the protocol to notify the RM 
> of new containers as and when they are allocated at the NM. The 
> {{RMContainer}} should also be extended to expose the {{ExecutionType}} of 
> the container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5141) Get Container logs for the Running application from Yarn Logs CommandLine

2016-05-27 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304390#comment-15304390
 ] 

Xuan Gong commented on YARN-5141:
-

Thanks for the review. Attached a new patch to fix the checkstyle issue.

> Get Container logs for the Running application from Yarn Logs CommandLine
> -
>
> Key: YARN-5141
> URL: https://issues.apache.org/jira/browse/YARN-5141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5141.1.patch, YARN-5141.2.patch
>
>
> Currently, we can only get container logs for Finished applications



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5173) Enable Token Support for AHSClient

2016-05-27 Thread Kuhu Shukla (JIRA)
Kuhu Shukla created YARN-5173:
-

 Summary: Enable Token Support for AHSClient
 Key: YARN-5173
 URL: https://issues.apache.org/jira/browse/YARN-5173
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 2.7.2
Reporter: Kuhu Shukla
Assignee: Kuhu Shukla


In a scenario where the YarnClient can't find an application during the 
getApplicationReport method call, it falls back to the AHS method(s) via RPC, 
which throws an AccessControlException as token support for AHS is disabled.
{code}
java.io.IOException: Failed on local exception: java.io.IOException: 
org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
via:[TYPE_OF_AUTH]; Host Details : local host is: "1.2.3.4"; destination host 
is: "ahs-address:ahs-port;
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5141) Get Container logs for the Running application from Yarn Logs CommandLine

2016-05-27 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5141:

Attachment: YARN-5141.2.patch

> Get Container logs for the Running application from Yarn Logs CommandLine
> -
>
> Key: YARN-5141
> URL: https://issues.apache.org/jira/browse/YARN-5141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5141.1.patch, YARN-5141.2.patch
>
>
> Currently, we can only get container logs for Finished applications



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5172) Update yarn daemonlog documentation due to HADOOP-12847

2016-05-27 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created YARN-5172:
-

 Summary: Update yarn daemonlog documentation due to HADOOP-12847
 Key: YARN-5172
 URL: https://issues.apache.org/jira/browse/YARN-5172
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Trivial


In HADOOP-12847, I updated the hadoop command manual, but I did not notice that 
yarn has the same command, which should also be updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5169) most YARN events have timestamp of -1

2016-05-27 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304349#comment-15304349
 ] 

Sangjin Lee commented on YARN-5169:
---

I'd also point out that there is already some code in the timeline service 
(e.g. {{SystemMetricsPublisher}}) that "fills in this timestamp" with the 
current timestamp for some events when writing to storage. However, strictly 
speaking, this is the time when the data is written to the timeline service 
storage, which is not necessarily the same thing as the actual event timestamp.

> most YARN events have timestamp of -1
> -
>
> Key: YARN-5169
> URL: https://issues.apache.org/jira/browse/YARN-5169
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 2.7.2
>Reporter: Sangjin Lee
>
> Most of the YARN events (subclasses of {{AbstractEvent}}) have timestamp of 
> -1. {{AbstractEvent}} have two constructors, one that initializes the 
> timestamp to -1 and the other to the caller-provided value. But most events 
> use the former (thus timestamp of -1).
> Some of the more common events, including {{ApplicationEvent}}, 
> {{ContainerEvent}}, {{JobEvent}}, etc. do not set the timestamp.
> The rationale for this behavior seems to be mentioned in {{AbstractEvent}}:
> {code}
>   // use this if you DON'T care about the timestamp
>   public AbstractEvent(TYPE type) {
> this.type = type;
> // We're not generating a real timestamp here.  It's too expensive.
> timestamp = -1L;
>   }
> {code}
> This absence of the timestamp isn't really visible in many cases and 
> therefore may have gone unnoticed, but the timeline service exposes this 
> problem very visibly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5127) Expose ExecutionType in Container api record

2016-05-27 Thread Hitesh Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Sharma updated YARN-5127:

Attachment: YARN-5127.005.patch

> Expose ExecutionType in Container api record
> 
>
> Key: YARN-5127
> URL: https://issues.apache.org/jira/browse/YARN-5127
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Hitesh Sharma
> Attachments: YARN-5127.002.patch, YARN-5127.003.patch, 
> YARN-5127.004.patch, YARN-5127.005.patch, YARN-5127.v1.patch
>
>
> Currently the ExecutionType of the Container returned as a response to the 
> allocate call is contained in the {{ContainerTokenIdentifier}}, which is 
> encoded into the ContainerToken.
> Unfortunately, the client would need to decode the returned token to access 
> the ContainerTokenIdentifier, which probably should not be allowed.
> This JIRA proposes to add a {{getExecutionType()}} method in the container 
> record.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node

2016-05-27 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5171:
--
Assignee: Inigo Goiri

> Extend DistributedSchedulerProtocol to notify RM of containers allocated by 
> the Node
> 
>
> Key: YARN-5171
> URL: https://issues.apache.org/jira/browse/YARN-5171
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Inigo Goiri
>
> Currently, the RM does not know about Containers allocated by the 
> OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the 
> Distributed Scheduler request interceptor and the protocol to notify the RM 
> of new containers as and when they are allocated at the NM. The 
> {{RMContainer}} should also be extended to expose the {{ExecutionType}} of 
> the container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5171) Extend DistributedSchedulerProtocol to notify RM of containers allocated by the Node

2016-05-27 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5171:
-

 Summary: Extend DistributedSchedulerProtocol to notify RM of 
containers allocated by the Node
 Key: YARN-5171
 URL: https://issues.apache.org/jira/browse/YARN-5171
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun Suresh


Currently, the RM does not know about Containers allocated by the 
OpportunisticContainerAllocator on the NM. This JIRA proposes to extend the 
Distributed Scheduler request interceptor and the protocol to notify the RM of 
new containers as and when they are allocated at the NM. The {{RMContainer}} 
should also be extended to expose the {{ExecutionType}} of the container.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304215#comment-15304215
 ] 

Hadoop QA commented on YARN-4844:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 65 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
3s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 31s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 28s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
35s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 19s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
48s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 
39s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-client in branch-2 failed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 0s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 32s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 3 new + 903 
unchanged - 0 fixed = 906 total (was 903) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 37s 
{color} | {color:red} root: patch generated 207 new + 5369 unchanged - 170 
fixed = 5576 total (was 5539) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
42s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
59s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-client in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 8s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | 

[jira] [Updated] (YARN-5007) MiniYarnCluster contains deprecated constructor which is called by the other constructors

2016-05-27 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated YARN-5007:
---
Attachment: (was: YARN-5007.01.patch)

> MiniYarnCluster contains deprecated constructor which is called by the other 
> constructors
> -
>
> Key: YARN-5007
> URL: https://issues.apache.org/jira/browse/YARN-5007
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: timelineserver
>Affects Versions: 2.6.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 2.8.0
>
>
> MiniYarnCluster has a deprecated constructor which is called by the other 
> constructors and it causes javac warnings during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5166) javadoc:javadoc goal fails on hadoop-yarn-client

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304128#comment-15304128
 ] 

Hudson commented on YARN-5166:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9877 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9877/])
YARN-5166. javadoc:javadoc goal fails on hadoop-yarn-client. Contributed 
(aajisaka: rev e4022debf717083ab9192164af9978500035d1be)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/YarnClient.java


> javadoc:javadoc goal fails on hadoop-yarn-client
> 
>
> Key: YARN-5166
> URL: https://issues.apache.org/jira/browse/YARN-5166
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Fix For: 2.8.0
>
> Attachments: YARN-5166.01.patch
>
>
> {code}mvn clean javadoc:javadoc -DskipTests{code}
> {code}
> 2 errors
> 180 warnings
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 5.758 s
> [INFO] Finished at: 2016-05-27T00:21:42+02:00
> [INFO] Final Memory: 29M/391M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:javadoc (default-cli) on 
> project hadoop-yarn-client: An error has occurred in JavaDocs report 
> generation:
> [ERROR] Exit code: 1 - 
> /Users/abokor/work/hdp/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AHSClient.java:48:
>  warning: no @return
> [ERROR] public static AHSClient createAHSClient() {
> {code}
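The warning points at a factory method that lacks a {{@return}} javadoc tag; an 
illustrative fix of that shape (method body abridged):
{code}
/**
 * Create a new instance of AHSClient.
 *
 * @return a new instance of AHSClient
 */
public static AHSClient createAHSClient() {
  return new AHSClientImpl();
}
{code}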



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4987) Read cache concurrency issue between read and evict in EntityGroupFS timeline store

2016-05-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304102#comment-15304102
 ] 

Hudson commented on YARN-4987:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9876 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9876/])
YARN-4987. Read cache concurrency issue between read and evict in (junping_du: 
rev 705286ccaeea36941d97ec1c1700746b74264924)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/TestEntityGroupFSTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityGroupFSTimelineStore.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/test/java/org/apache/hadoop/yarn/server/timeline/EntityGroupPlugInForTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/EntityCacheItem.java


> Read cache concurrency issue between read and evict in EntityGroupFS timeline 
> store 
> 
>
> Key: YARN-4987
> URL: https://issues.apache.org/jira/browse/YARN-4987
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: YARN-4987-trunk.001.patch, YARN-4987-trunk.002.patch, 
> YARN-4987-trunk.003.patch, YARN-4987-trunk.004.patch
>
>
> To handle concurrency issues, key-value based timeline storage may return 
> null on reads that are concurrent with service stop. This is actually caused 
> by a concurrency issue between cache reads and evicts. Specifically, if the 
> storage is being read when it gets evicted, the storage may become null. The 
> EntityGroupFS timeline store needs to handle this case gracefully.
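A sketch of the graceful-handling idea, with all names as simplified stand-ins 
for the actual EntityGroupFS classes:
{code}
class CacheReadSketch {
  private Object store; // stands in for the cached TimelineStore

  synchronized Object getStoreForRead() {
    if (store == null) {
      // evicted between lookup and read: reload rather than failing
      store = reload();
    }
    return store;
  }

  synchronized void evict() {
    store = null;
  }

  private Object reload() {
    return new Object(); // placeholder for re-initializing the cache item
  }
}
{code}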



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5166) javadoc:javadoc goal fails on hadoop-yarn-client

2016-05-27 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304099#comment-15304099
 ] 

Akira AJISAKA commented on YARN-5166:
-

Thanks [~boky01] for reporting this and providing the patch. +1, committing 
this shortly.

> javadoc:javadoc goal fails on hadoop-yarn-client
> 
>
> Key: YARN-5166
> URL: https://issues.apache.org/jira/browse/YARN-5166
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: YARN-5166.01.patch
>
>
> {code}mvn clean javadoc:javadoc -DskipTests{code}
> {code}
> 2 errors
> 180 warnings
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 5.758 s
> [INFO] Finished at: 2016-05-27T00:21:42+02:00
> [INFO] Final Memory: 29M/391M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:javadoc (default-cli) on 
> project hadoop-yarn-client: An error has occurred in JavaDocs report 
> generation:
> [ERROR] Exit code: 1 - 
> /Users/abokor/work/hdp/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AHSClient.java:48:
>  warning: no @return
> [ERROR] public static AHSClient createAHSClient() {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3368) [Umbrella] YARN web UI: Next generation

2016-05-27 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304095#comment-15304095
 ] 

Kai Sasaki commented on YARN-3368:
--

[~sunilg] Thank you so much. After I launched the resourcemanager on localhost, 
I can see the new WebUI.
But initially I launched the resourcemanager in a docker container in 
docker-machine on my local Mac, so there might be some cross-domain problem, I 
think. Anyway, it works with a localhost resourcemanager. Thanks!

> [Umbrella] YARN web UI: Next generation
> ---
>
> Key: YARN-3368
> URL: https://issues.apache.org/jira/browse/YARN-3368
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jian He
> Attachments: (Dec 3 2015) yarn-ui-screenshots.zip, (POC, Aug-2015)) 
> yarn-ui-screenshots.zip
>
>
> The goal is to improve the YARN UI for better usability.
> We may take advantage of some existing front-end frameworks to build a 
> fancier, easier-to-use UI. 
> The old UI continues to exist until we feel it's ready to flip to the new UI.
> This serves as an umbrella jira to track the tasks. We can do this in a 
> branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5130) Mark ContainerStatus and NodeReport as evolving

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304036#comment-15304036
 ] 

Hadoop QA commented on YARN-5130:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806627/YARN-5130.001.patch |
| JIRA Issue | YARN-5130 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 72ba829906bb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bde819a |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11732/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11732/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Mark ContainerStatus and NodeReport as evolving
> ---
>
> Key: YARN-5130
> URL: https://issues.apache.org/jira/browse/YARN-5130
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-5130.001.patch
>
>
> It turns out that 

[jira] [Commented] (YARN-5130) Mark ContainerStatus and NodeReport as evolving

2016-05-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15304020#comment-15304020
 ] 

Gergely Novák commented on YARN-5130:
-

Added patch for [~ste...@apache.org]'s suggestion.

> Mark ContainerStatus and NodeReport as evolving
> ---
>
> Key: YARN-5130
> URL: https://issues.apache.org/jira/browse/YARN-5130
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-5130.001.patch
>
>
> It turns out that slider won't build as the {{ContainerStatus}} and 
> {{NodeReport}} classes have added more abstract methods, breaking the mock 
> objects.
> While it is everyone's freedom to change things, these classes are both tagged
> {code}
> @Public
> @Stable
> {code}
> Given they aren't stable, can someone mark them as {{@Evolving}}? That way, 
> when downstream code breaks, we can be less disappointed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5130) Mark ContainerStatus and NodeReport as evolving

2016-05-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5130:

Attachment: YARN-5130.001.patch

> Mark ContainerStatus and NodeReport as evolving
> ---
>
> Key: YARN-5130
> URL: https://issues.apache.org/jira/browse/YARN-5130
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: YARN-5130.001.patch
>
>
> It turns out that slider won't build as the {{ContainerStatus}} and 
> {{NodeReport}} classes have added more abstract methods, breaking the mock 
> objects.
> While it is everyone's freedom to change things, these classes are both tagged
> {code}
> @Public
> @Stable
> {code}
> Given they aren't stable, can someone mark them as {{@Evolving}}? That way, 
> when downstream code breaks, we can be less disappointed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5130) Mark ContainerStatus and NodeReport as evolving

2016-05-27 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák reassigned YARN-5130:
---

Assignee: Gergely Novák

> Mark ContainerStatus and NodeReport as evolving
> ---
>
> Key: YARN-5130
> URL: https://issues.apache.org/jira/browse/YARN-5130
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Gergely Novák
>Priority: Minor
>
> It turns out that slider won't build as the {{ContainerStatus}} and 
> {{NodeReport}} classes have added more abstract methods, thereby breaking the 
> mock objects.
> While it is everyone's freedom to change things, these classes are both tagged
> {code}
> @Public
> @Stable
> {code}
> Given they aren't stable, can someone mark them as {{@Evolving}}? That way, 
> when downstream code breaks, we can be less disappointed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-05-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303914#comment-15303914
 ] 

Hadoop QA commented on YARN-4844:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 66 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 
40s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-client in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 44s 
{color} | {color:red} root: patch generated 218 new + 5503 unchanged - 176 
fixed = 5721 total (was 5679) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
40s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 
55s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 37s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 26s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 7s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 16s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 57s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 42s 
{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 28s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} 
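
As context for the patch under test, a hedged sketch of the accessor style the
JIRA title describes; the method names follow the title ({{getMemoryLong}},
{{getVirtualCoreLong}}) and are illustrative, not necessarily the final
committed API:

{code}
// Illustrative only: long-valued accessors alongside the existing int-valued
// ones, so resource sizes beyond Integer.MAX_VALUE can be represented without
// overflow. Names follow the JIRA title, not necessarily the committed API.
public abstract class Resource {
  /** Existing int-valued accessors, kept for source and binary compatibility. */
  public abstract int getMemory();
  public abstract int getVirtualCores();

  /** Hypothetical long-valued variants, defaulting to the int values. */
  public long getMemoryLong() {
    return getMemory();
  }

  public long getVirtualCoreLong() {
    return getVirtualCores();
  }
}
{code}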

[jira] [Commented] (YARN-5141) Get Container logs for the Running application from Yarn Logs CommandLine

2016-05-27 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303913#comment-15303913
 ] 

Varun Vasudev commented on YARN-5141:
-

[~xgong] - patch looks good. Please fix the checkstyle issue.

> Get Container logs for the Running application from Yarn Logs CommandLine
> -
>
> Key: YARN-5141
> URL: https://issues.apache.org/jira/browse/YARN-5141
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5141.1.patch
>
>
> Currently, we can only get container logs for finished applications.
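
As a sketch of what this would enable, here is a hedged example that drives the
existing {{LogsCLI}} programmatically; the flags are the CLI's current
{{-applicationId}}/{{-containerId}} options, and the IDs are placeholders:

{code}
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.yarn.client.cli.LogsCLI;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Equivalent to: yarn logs -applicationId <appId> -containerId <containerId>
// With the patch, this would also work while the application is RUNNING,
// not only after it reaches a finished state. IDs below are placeholders.
public class FetchRunningContainerLogs {
  public static void main(String[] args) throws Exception {
    LogsCLI cli = new LogsCLI();
    cli.setConf(new YarnConfiguration());
    int rc = ToolRunner.run(cli, new String[] {
        "-applicationId", "application_1464300000000_0001",
        "-containerId", "container_1464300000000_0001_01_000002"});
    System.exit(rc);
  }
}
{code}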



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


