[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320023#comment-15320023
 ] 

Hadoop QA commented on YARN-5170:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
9s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
12s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 28s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 33 new + 0 unchanged - 2 fixed = 33 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed 
with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-jdk1.7.0_101
 with JDK v1.7.0_101 generated 11 new + 0 unchanged - 0 fixed = 11 total (was 
0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 41s {color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch 
failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_101. {color} |
| 

[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should be lowered.

2016-06-07 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320001#comment-15320001
 ] 

Naganarasimha G R commented on YARN-4464:
-

Thanks for the comments [~vinodkv],
bq.  I remember that Jian He did some benchmarking to demonstrate that recovery 
of 10K apps takes only 10 seconds. We need to understand the root-cause here.
You are right: although initially there were some discussions on the probable 
cause of the delay, later on the thread just moved to modifying the default 
value. Initially I thought it might be because of YARN-3104 (as mentioned by 
[~kasha]) or YARN-4041, but I am not quite sure about it.

But having said that, I was thinking more along the lines of whether it is 
required to store so many finished apps when we already support ATS. Apart 
from adding to the startup time (nominal, but unnecessary when there are many 
running apps in a large cluster), it was also adding a lot of unnecessary logs 
and ATS event publishing. Hence I was more inclined to reducing the default 
value.
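
For operators hitting this before any change to the default, the property can 
also be overridden explicitly. A minimal sketch in the same key=value shorthand 
as the {code} block in the description quoted below; the value 1000 is only an 
illustrative example, not a setting recommended in this thread:

{code}
yarn.resourcemanager.recovery.enabled=true
yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
yarn.resourcemanager.state-store.max-completed-applications=1000
{code}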



> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should be lowered.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so that 
> property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow: I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing 
> and that its default value is very large.
> We need to change it to a lower value or add a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319991#comment-15319991
 ] 

Vrushali C commented on YARN-5170:
--

Alright, it picked H6 now 
https://builds.apache.org/job/PreCommit-YARN-Build/11909/


> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> agreed it is best to get rid of these singletons and make them simple 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class. As part of this refactor I'll move these out, to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utilities that don't really belong anywhere else.
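
As a rough illustration of the refactor described above, here is a generic Java 
sketch (class names are hypothetical, not the actual timelineservice converter 
or row key classes): the converter becomes a plain instance field instead of a 
singleton reached through static methods, and the encoded bytes are produced on 
demand rather than in the constructor.

{code}
// Hypothetical sketch only; not the actual KeyConverter/RowKey classes.
interface ExampleConverter<T> {
  byte[] encode(T value);
}

final class ExampleLongConverter implements ExampleConverter<Long> {
  @Override
  public byte[] encode(Long value) {
    return java.nio.ByteBuffer.allocate(Long.BYTES).putLong(value).array();
  }
}

final class ExampleRowKey {
  // An instance field instead of a singleton behind a static helper, so no
  // global state is shared and tests can substitute converters freely.
  private final ExampleConverter<Long> converter = new ExampleLongConverter();
  private final long id;

  ExampleRowKey(long id) {
    this.id = id;
  }

  byte[] getRowKeyBytes() {
    // Encoded on demand rather than in the constructor, so no partially
    // constructed object is ever handed to the converter.
    return converter.encode(id);
  }
}
{code}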



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319989#comment-15319989
 ] 

Vrushali C commented on YARN-5170:
--

Looks like the build failed again: 
https://builds.apache.org/job/PreCommit-YARN-Build/11897/

I retriggered the build around 5 or 6 times, but each time it got scheduled on 
the H8 machine. Any idea how I can pass in a parameter to make it run on H6? 


> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> agreed it is best to get rid of these singletons and make them simple 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class. As part of this refactor I'll move these out, to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utilities that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319900#comment-15319900
 ] 

Hadoop QA commented on YARN-5204:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 53s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808825/YARN-5204.005.patch |
| JIRA Issue | YARN-5204 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3649b58d1170 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 76f0800 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11902/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11902/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Properly report status of killed/stopped queued containers
> --
>
> Key: YARN-5204
> URL: https://issues.apache.org/jira/browse/YARN-5204
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5204.001.patch, YARN-5204.002.patch, 
> YARN-5204.003.patch, YARN-5204.004.patch, YARN-5204.005.patch
>
>
> When a 

[jira] [Updated] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5204:
-
Attachment: YARN-5204.005.patch

Attaching new version.

> Properly report status of killed/stopped queued containers
> --
>
> Key: YARN-5204
> URL: https://issues.apache.org/jira/browse/YARN-5204
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5204.001.patch, YARN-5204.002.patch, 
> YARN-5204.003.patch, YARN-5204.004.patch, YARN-5204.005.patch
>
>
> When a queued container gets killed or stopped, we need to report its status 
> in the {{getContainerStatusInternal}} method of the 
> {{QueuingContainerManagerImpl}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319864#comment-15319864
 ] 

Hadoop QA commented on YARN-5204:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 59s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808817/YARN-5204.004.patch |
| JIRA Issue | YARN-5204 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8294f85bcf6e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 76f0800 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11900/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11900/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11900/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11900/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Properly 

[jira] [Commented] (YARN-5188) FairScheduler performance bug

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319860#comment-15319860
 ] 

Hadoop QA commented on YARN-5188:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-5188 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808820/YARN-5188-1.patch |
| JIRA Issue | YARN-5188 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11901/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0
>Reporter: ChenFolin
> Attachments: YARN-5188-1.patch
>
>
> My Hadoop cluster has recently encountered a performance problem. Details 
> follow.
> There are two points which can cause this performance issue.
> 1: Applications are sorted before each container assignment in FSLeafQueue. A 
> TreeSet is not the best choice here. Why not keep the collection ordered and 
> use binary search to restore the ordering when an application's resource 
> usage changes?
> 2: Queue sorting and assignContainerPreCheck recompute the resource usage of 
> every leaf queue. Why can't we keep the leaf queue usage in memory and update 
> it when an assign-container or release-container operation happens?
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact the cluster has a 
> lot of PendingMB and PendingVCores, while current cluster utilization may be 
> below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5-10 ms, versus only 0-1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scenario:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us (6.7539 ms)
> apps sort time (FSLeafQueue: Collections.sort(runnableApps, comparator);): 
> 4657.01 us (4.657 ms)
> compute LeafQueue resource usage: 905.171 us (0.905171 ms)
>
> With just root.default, one assign-container operation contains: (one apps 
> sort op) + 2 * (compute leaf queue usage op).
> Based on the above, I think the assign-container operation has a performance 
> problem.
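
As a rough sketch of point 1 in the description above (a hypothetical helper 
class, not the actual FSLeafQueue code): keep the applications in an ordered 
list and reposition a single entry with binary search when its resource usage 
changes, instead of re-sorting the whole collection on every assignment.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of point 1 above, not the actual FSLeafQueue code:
// keep apps ordered and reposition one entry via binary search when its
// sort key (resource usage) changes, instead of re-sorting everything.
public class OrderedApps<T> {
  private final List<T> apps = new ArrayList<>();
  private final Comparator<T> comparator;

  public OrderedApps(Comparator<T> comparator) {
    this.comparator = comparator;
  }

  // O(log n) to find the position plus O(n) to shift, vs. an O(n log n) sort.
  public void add(T app) {
    int idx = Collections.binarySearch(apps, app, comparator);
    if (idx < 0) {
      idx = -(idx + 1); // binarySearch encodes the insertion point this way
    }
    apps.add(idx, app);
  }

  // Call after an app's resource usage (and hence its ordering) has changed.
  public void usageChanged(T app) {
    apps.remove(app); // linear removal; the old position is no longer valid
    add(app);         // re-insert at the new ordered position
  }
}
{code}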



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5188) FairScheduler performance bug

2016-06-07 Thread ChenFolin (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15309375#comment-15309375
 ] 

ChenFolin edited comment on YARN-5188 at 6/8/16 2:09 AM:
-

Thanks Xianyin Xin,

The TreeSet operation time complexity here is O(log 1) + O(log 2) + ... + 
O(log(app num)), whereas binary search is O(log(app num)).

Point 2 is the same as YARN-4090.


was (Author: chenfolin):
Thanks Xianyin Xin,

The TreeSet operation time complexity here is O(log 1) + O(log 2) + ... + 
O(log(app num)), whereas binary search is O(log(app num)).

Point 2 is the same as YARN-4090.

And can you tell me your phone number or wechat or qq ? Send a mail to 
chenfo...@gmail.com 

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0
>Reporter: ChenFolin
> Attachments: YARN-5188-1.patch
>
>
> My Hadoop cluster has recently encountered a performance problem. Details 
> follow.
> There are two points which can cause this performance issue.
> 1: Applications are sorted before each container assignment in FSLeafQueue. A 
> TreeSet is not the best choice here. Why not keep the collection ordered and 
> use binary search to restore the ordering when an application's resource 
> usage changes?
> 2: Queue sorting and assignContainerPreCheck recompute the resource usage of 
> every leaf queue. Why can't we keep the leaf queue usage in memory and update 
> it when an assign-container or release-container operation happens?
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact the cluster has a 
> lot of PendingMB and PendingVCores, while current cluster utilization may be 
> below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5-10 ms, versus only 0-1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scenario:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us (6.7539 ms)
> apps sort time (FSLeafQueue: Collections.sort(runnableApps, comparator);): 
> 4657.01 us (4.657 ms)
> compute LeafQueue resource usage: 905.171 us (0.905171 ms)
>
> With just root.default, one assign-container operation contains: (one apps 
> sort op) + 2 * (compute leaf queue usage op).
> Based on the above, I think the assign-container operation has a performance 
> problem.
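
Similarly, a hypothetical sketch of point 2 above (illustrative names only, not 
the FairScheduler internals): maintain the leaf queue's resource usage 
incrementally on assign/release, so checks such as assignContainerPreCheck read 
a cached value instead of re-aggregating over all applications.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of point 2 above; class and method names are
// illustrative, not the actual FairScheduler code.
public class CachedQueueUsage {
  private final AtomicLong usedMemoryMb = new AtomicLong();

  // Called when a container is assigned from this queue.
  public void onContainerAssigned(long memoryMb) {
    usedMemoryMb.addAndGet(memoryMb);
  }

  // Called when a container from this queue is released.
  public void onContainerReleased(long memoryMb) {
    usedMemoryMb.addAndGet(-memoryMb);
  }

  // O(1) read for each scheduling check, instead of an O(number of apps)
  // aggregation on every call.
  public long getUsedMemoryMb() {
    return usedMemoryMb.get();
  }
}
{code}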



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5188) FairScheduler performance bug

2016-06-07 Thread ChenFolin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChenFolin updated YARN-5188:

Attachment: (was: YARN-5188-1.patch)

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0
>Reporter: ChenFolin
> Attachments: YARN-5188-1.patch
>
>
> My Hadoop cluster has recently encountered a performance problem. Details 
> follow.
> There are two points which can cause this performance issue.
> 1: Applications are sorted before each container assignment in FSLeafQueue. A 
> TreeSet is not the best choice here. Why not keep the collection ordered and 
> use binary search to restore the ordering when an application's resource 
> usage changes?
> 2: Queue sorting and assignContainerPreCheck recompute the resource usage of 
> every leaf queue. Why can't we keep the leaf queue usage in memory and update 
> it when an assign-container or release-container operation happens?
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact the cluster has a 
> lot of PendingMB and PendingVCores, while current cluster utilization may be 
> below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5-10 ms, versus only 0-1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scenario:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us (6.7539 ms)
> apps sort time (FSLeafQueue: Collections.sort(runnableApps, comparator);): 
> 4657.01 us (4.657 ms)
> compute LeafQueue resource usage: 905.171 us (0.905171 ms)
>
> With just root.default, one assign-container operation contains: (one apps 
> sort op) + 2 * (compute leaf queue usage op).
> Based on the above, I think the assign-container operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5188) FairScheduler performance bug

2016-06-07 Thread ChenFolin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChenFolin updated YARN-5188:

Attachment: YARN-5188-1.patch

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0
>Reporter: ChenFolin
> Attachments: YARN-5188-1.patch
>
>
> My Hadoop cluster has recently encountered a performance problem. Details 
> follow.
> There are two points which can cause this performance issue.
> 1: Applications are sorted before each container assignment in FSLeafQueue. A 
> TreeSet is not the best choice here. Why not keep the collection ordered and 
> use binary search to restore the ordering when an application's resource 
> usage changes?
> 2: Queue sorting and assignContainerPreCheck recompute the resource usage of 
> every leaf queue. Why can't we keep the leaf queue usage in memory and update 
> it when an assign-container or release-container operation happens?
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact the cluster has a 
> lot of PendingMB and PendingVCores, while current cluster utilization may be 
> below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5-10 ms, versus only 0-1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scenario:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us (6.7539 ms)
> apps sort time (FSLeafQueue: Collections.sort(runnableApps, comparator);): 
> 4657.01 us (4.657 ms)
> compute LeafQueue resource usage: 905.171 us (0.905171 ms)
>
> With just root.default, one assign-container operation contains: (one apps 
> sort op) + 2 * (compute leaf queue usage op).
> Based on the above, I think the assign-container operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5188) FairScheduler performance bug

2016-06-07 Thread ChenFolin (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChenFolin updated YARN-5188:

Attachment: (was: YARN-5188.patch)

> FairScheduler performance bug
> -
>
> Key: YARN-5188
> URL: https://issues.apache.org/jira/browse/YARN-5188
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.5.0
>Reporter: ChenFolin
> Attachments: YARN-5188-1.patch
>
>
> My Hadoop cluster has recently encountered a performance problem. Details 
> follow.
> There are two points which can cause this performance issue.
> 1: Applications are sorted before each container assignment in FSLeafQueue. A 
> TreeSet is not the best choice here. Why not keep the collection ordered and 
> use binary search to restore the ordering when an application's resource 
> usage changes?
> 2: Queue sorting and assignContainerPreCheck recompute the resource usage of 
> every leaf queue. Why can't we keep the leaf queue usage in memory and update 
> it when an assign-container or release-container operation happens?
>
> The efficiency of container assignment in the ResourceManager may fall as the 
> number of running and pending applications grows. In fact the cluster has a 
> lot of PendingMB and PendingVCores, while current cluster utilization may be 
> below 20%.
> I checked the ResourceManager logs and found that each container assignment 
> may cost 5-10 ms, versus only 0-1 ms at usual times.
>
> I used TestFairScheduler to reproduce the scenario:
>
> Just one queue: root.default
> 10240 apps.
>
> assign container avg time: 6753.9 us (6.7539 ms)
> apps sort time (FSLeafQueue: Collections.sort(runnableApps, comparator);): 
> 4657.01 us (4.657 ms)
> compute LeafQueue resource usage: 905.171 us (0.905171 ms)
>
> With just root.default, one assign-container operation contains: (one apps 
> sort op) + 2 * (compute leaf queue usage op).
> Based on the above, I think the assign-container operation has a performance 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5212) Run existing ContainerManager tests using QueuingContainerManagerImpl

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319853#comment-15319853
 ] 

Hadoop QA commented on YARN-5212:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 11s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 11s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808815/YARN-5212.001.patch |
| JIRA Issue | YARN-5212 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 89f7b12a534f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 76f0800 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11899/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11899/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11899/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11899/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-06-07 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319851#comment-15319851
 ] 

Jian He commented on YARN-1942:
---

Patch looks good. Are the test failures related?

> Many of ConverterUtils methods need to have public interfaces
> -
>
> Key: YARN-1942
> URL: https://issues.apache.org/jira/browse/YARN-1942
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.4.0
>Reporter: Thomas Graves
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-1942-branch-2.0012.patch, YARN-1942.1.patch, 
> YARN-1942.10.patch, YARN-1942.11.patch, YARN-1942.12.patch, 
> YARN-1942.2.patch, YARN-1942.3.patch, YARN-1942.4.patch, YARN-1942.5.patch, 
> YARN-1942.6.patch, YARN-1942.8.patch, YARN-1942.9.patch
>
>
> ConverterUtils has a bunch of functions that are useful to application 
> masters. It should either be made public, or we should make some of its 
> utilities public, or we should provide other external APIs for application 
> masters to use. Note that distributedshell and MR are both using these 
> interfaces.
> For instance, the main use case I see right now is getting the application 
> attempt id within the AM:
> String containerIdStr =
>     System.getenv(Environment.CONTAINER_ID.name());
> ContainerId containerId = ConverterUtils.toContainerId(containerIdStr);
> ApplicationAttemptId applicationAttemptId =
>     containerId.getApplicationAttemptId();
> I don't see any other way for the application master to get this information. 
> If there is, please let me know.
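
For reference, a self-contained version of the snippet above (a sketch assuming 
the usual hadoop-yarn-api and hadoop-yarn-common dependencies on the classpath; 
the calls themselves are the ones quoted in the description):

{code}
import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.util.ConverterUtils;

public class AttemptIdFromEnv {
  public static void main(String[] args) {
    // The NodeManager exports the AM container's id in the environment.
    String containerIdStr = System.getenv(Environment.CONTAINER_ID.name());
    ContainerId containerId = ConverterUtils.toContainerId(containerIdStr);
    ApplicationAttemptId applicationAttemptId =
        containerId.getApplicationAttemptId();
    System.out.println("Application attempt id: " + applicationAttemptId);
  }
}
{code}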



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319844#comment-15319844
 ] 

Hadoop QA commented on YARN-5204:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 15s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808807/YARN-5204.003.patch |
| JIRA Issue | YARN-5204 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3b551dd85ca8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 76f0800 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11898/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11898/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11898/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11898/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Properly 

[jira] [Updated] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5204:
-
Attachment: YARN-5204.004.patch

Increasing timeout.

> Properly report status of killed/stopped queued containers
> --
>
> Key: YARN-5204
> URL: https://issues.apache.org/jira/browse/YARN-5204
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5204.001.patch, YARN-5204.002.patch, 
> YARN-5204.003.patch, YARN-5204.004.patch
>
>
> When a queued container gets killed or stopped, we need to report its status 
> in the {{getContainerStatusInternal}} method of the 
> {{QueuingContainerManagerImpl}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319833#comment-15319833
 ] 

Hadoop QA commented on YARN-5204:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 8s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808804/YARN-5204.002.patch |
| JIRA Issue | YARN-5204 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0e6c28af5afc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 76f0800 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/11896/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/11896/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11896/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11896/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Properly report 

[jira] [Updated] (YARN-5212) Run existing ContainerManager tests using QueuingContainerManagerImpl

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5212:
-
Attachment: YARN-5212.001.patch

Attaching patch.

> Run existing ContainerManager tests using QueuingContainerManagerImpl
> -
>
> Key: YARN-5212
> URL: https://issues.apache.org/jira/browse/YARN-5212
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5212.001.patch
>
>
> The existing {{TestContainerManager}} test class will be modified to be able 
> to use both the {{ContainerManagerImpl}} and the 
> {{QueuingContainerManagerImpl}} during the tests. This way we will make sure 
> that no regression was introduced in the existing cases by the 
> {{QueuingContainerManagerImpl}}.
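
One common way to structure this kind of dual-mode run is JUnit 4 
parameterization. A generic sketch of the pattern follows (the class name and 
the trivial test body are hypothetical; the attached patch may wire the two 
container manager implementations into the existing tests differently):

{code}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Generic sketch of the dual-mode pattern only; not the attached patch.
@RunWith(Parameterized.class)
public class DualModeContainerManagerTest {

  @Parameters(name = "queuingEnabled={0}")
  public static Collection<Object[]> modes() {
    // Every @Test method runs twice: once per flag value.
    return Arrays.asList(new Object[][] {{false}, {true}});
  }

  private final boolean queuingEnabled;

  public DualModeContainerManagerTest(boolean queuingEnabled) {
    this.queuingEnabled = queuingEnabled;
  }

  @Test
  public void existingTestCase() {
    // A real test would create the queuing or non-queuing container manager
    // based on the flag and run the existing assertions unchanged against it.
    Assert.assertNotNull(Boolean.valueOf(queuingEnabled));
  }
}
{code}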



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319823#comment-15319823
 ] 

Hadoop QA commented on YARN-1942:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 36 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
18s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 14s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
56s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 42s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
32s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 13s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
branch-2 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 45s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 52s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 5s 
{color} | {color:red} root: The patch generated 112 new + 2946 unchanged - 34 
fixed = 3058 total (was 2980) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
36s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 3s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 37s 
{color} | {color:green} hadoop-yarn-server-nodemanager in 

[jira] [Updated] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5204:
-
Attachment: YARN-5204.003.patch

Removing some superfluous methods from {{TestQueuingContainerManager}}.

> Properly report status of killed/stopped queued containers
> --
>
> Key: YARN-5204
> URL: https://issues.apache.org/jira/browse/YARN-5204
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5204.001.patch, YARN-5204.002.patch, 
> YARN-5204.003.patch
>
>
> When a queued container gets killed or stopped, we need to report its status 
> in the {{getContainerStatusInternal}} method of the 
> {{QueuingContainerManagerImpl}}.
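A minimal sketch of what that reported status could look like (illustrative only; the helper class is made up, and the choice of exit code is an assumption rather than the patch's actual behavior):

{code}
import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerState;
import org.apache.hadoop.yarn.api.records.ContainerStatus;

// Illustrative helper, not the real QueuingContainerManagerImpl code: the terminal
// status getContainerStatusInternal should hand back for a container that was
// killed or stopped while it was still queued at the NM.
public final class QueuedContainerStatusSketch {
  private QueuedContainerStatusSketch() {
  }

  public static ContainerStatus killedWhileQueued(ContainerId containerId) {
    return ContainerStatus.newInstance(
        containerId,
        ContainerState.COMPLETE,                        // the container never ran
        "Container killed/stopped while queued at the NM",
        ContainerExitStatus.KILLED_BY_RESOURCEMANAGER); // assumed exit-code choice
  }
}
{code}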



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5170:
---
Attachment: YARN-5170-YARN-2928.05.patch

Ah, patch 04 was incomplete and was missing the new classes.
Adding patch 05.

Thanks [~vrushalic] and [~varun_saxena] for helping to get the build unstuck.

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> agreed it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move those out to keep the 
> "Utils" class as small as possible, reserving it for truly general-purpose 
> utilities that don't really belong anywhere else.
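The shape of the change, sketched on a made-up row-key class (the real classes live in the timelineservice storage code; every name below is illustrative):

{code}
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the refactor: instead of singleton converters and static
// access, the row-key class holds a plain instance field and computes the byte[]
// form on demand, so no partially constructed object is ever handed to the
// converter.
public class ExampleRowKey {
  // stands in for the KeyConverter implementations introduced by YARN-5109
  private final ExampleKeyConverter converter = new ExampleKeyConverter();
  private final String clusterId;
  private final String appId;

  public ExampleRowKey(String clusterId, String appId) {
    this.clusterId = clusterId;
    this.appId = appId;
  }

  public byte[] getRowKey() {
    return converter.encode(clusterId + "!" + appId);
  }

  static class ExampleKeyConverter {
    byte[] encode(String key) {
      return key.getBytes(StandardCharsets.UTF_8);
    }
  }
}
{code}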



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5204:
-
Attachment: YARN-5204.002.patch

Fixing a minor issue.

> Properly report status of killed/stopped queued containers
> --
>
> Key: YARN-5204
> URL: https://issues.apache.org/jira/browse/YARN-5204
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5204.001.patch, YARN-5204.002.patch
>
>
> When a queued container gets killed or stopped, we need to report its status 
> in the {{getContainerStatusInternal}} method of the 
> {{QueuingContainerManagerImpl}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5204) Properly report status of killed/stopped queued containers

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5204:
-
Attachment: YARN-5204.001.patch

Attaching patch.
Status of killed/stopped queued containers is now properly reported.
Also included additional test cases.

> Properly report status of killed/stopped queued containers
> --
>
> Key: YARN-5204
> URL: https://issues.apache.org/jira/browse/YARN-5204
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5204.001.patch
>
>
> When a queued container gets killed or stopped, we need to report its status 
> in the {{getContainerStatusInternal}} method of the 
> {{QueuingContainerManagerImpl}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-06-07 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319802#comment-15319802
 ] 

Vinod Kumar Vavilapalli commented on YARN-4464:
---

Tx for the ping [~templedf].

I haven't paid attention to this before. Apologies for pitching in very late.

bq. recovery process was very slow. I have waited about 20min. 
Did we ever find out why this takes 20 minutes? As part of the original 
recovery feature, I remember that [~jianhe] did some benchmarking to 
demonstrate that recovery of 10K apps takes only 10 seconds. We need to 
understand the root cause here.

Irrespective of that, even in trunk, if we prove that recovery takes much 
longer than 10 seconds in some unavoidable cases, the right solution is to 
move the recovery of completed applications alone to the background.

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch
>
>
> My cluster has 120 nodes.
> I configured the RM restart feature:
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, 
> so that property kept its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately.
> Instead, the recovery process was very slow; I waited about 20 minutes before 
> realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing.
> Its default value is very large.
> We need to change it to a lower value or add a notice to the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].
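For reference, the mitigation on the user side is simply to set the property to a much lower value (normally done in yarn-site.xml; shown programmatically here to keep the example in Java, and 1,000 is an illustrative number, not a recommendation):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class LowerStateStoreLimit {
  public static void main(String[] args) {
    // Keep fewer completed applications in the RM state store so that a restart
    // has less state to replay during recovery.
    Configuration conf = new YarnConfiguration();
    conf.setInt("yarn.resourcemanager.state-store.max-completed-applications", 1000);
    System.out.println(
        conf.get("yarn.resourcemanager.state-store.max-completed-applications"));
  }
}
{code}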



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319788#comment-15319788
 ] 

Hudson commented on YARN-5176:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9926 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9926/])
YARN-5176. More test cases for queuing of containers at the NM. (arun suresh: 
rev 76f0800c21f49fba01694cbdc870103053da802c)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/queuing/TestQueuingContainerManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/queuing/QueuingContainerManagerImpl.java


> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.8.0
>
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319767#comment-15319767
 ] 

Hadoop QA commented on YARN-4837:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 2s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 9s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 20s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
branch-2 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 54s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 1493 unchanged - 17 fixed = 1494 total (was 1510) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 6s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 54s {color} 

[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319753#comment-15319753
 ] 

Arun Suresh commented on YARN-5176:
---

+1

> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319705#comment-15319705
 ] 

Hadoop QA commented on YARN-5176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 152 unchanged - 4 fixed = 153 total (was 156) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 25s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 51s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808776/YARN-5176.003.patch |
| JIRA Issue | YARN-5176 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 39d8365ff0f9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 58be55b |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11895/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11895/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11895/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: 

[jira] [Commented] (YARN-5180) Allow ResourceRequest to specify an enforceExecutionType flag

2016-06-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319664#comment-15319664
 ] 

Hudson commented on YARN-5180:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9925 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9925/])
Addendum patch for YARN-5180 updating findbugs-exclude.xml (arun suresh: rev 
8554aee1bef5aff9e49e5e9119d6a7a4abf1c432)
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml


> Allow ResourceRequest to specify an enforceExecutionType flag
> -
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.9.0
>
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch, 
> YARN-5180.003.patch, YARN-5180.004.patch, YARN-5180.005.patch, 
> YARN-5180.006.patch, YARN-5180.007.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine with receiving a Container of a different 
> ExecutionType than the one asked for.
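A hedged sketch of what that could look like from the AM side (the ExecutionTypeRequest factory and the setter name follow this JIRA's direction but are assumptions, not verified against the committed API):

{code}
import org.apache.hadoop.yarn.api.records.ExecutionType;
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public final class EnforceExecutionTypeSketch {
  private EnforceExecutionTypeSketch() {
  }

  // Sketch: the AM prefers an OPPORTUNISTIC container but, with the flag set to
  // false, will accept a container of a different ExecutionType from the scheduler.
  public static ResourceRequest opportunisticButNegotiable() {
    ResourceRequest req = ResourceRequest.newInstance(
        Priority.newInstance(1), ResourceRequest.ANY,
        Resource.newInstance(1024, 1), 1);
    req.setExecutionTypeRequest(
        ExecutionTypeRequest.newInstance(ExecutionType.OPPORTUNISTIC, false));
    return req;
  }
}
{code}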



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5199) Close LogReader in in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs

2016-06-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319665#comment-15319665
 ] 

Hudson commented on YARN-5199:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9925 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9925/])
YARN-5199. Close LogReader in in AHSWebServices#getStreamingOutput and (xgong: 
rev 58be55b6e07b94aa55ed87c461f3e5c04cc61630)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NMWebServices.java


> Close LogReader in in AHSWebServices#getStreamingOutput and FileInputStream 
> in NMWebServices#getLogs
> 
>
> Key: YARN-5199
> URL: https://issues.apache.org/jira/browse/YARN-5199
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5199.1.patch, YARN-5199.2.patch, YARN-5199.3.patch
>
>
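The underlying change is about releasing file handles; the general pattern, shown here on a plain FileInputStream (illustrative only, not the actual web-service code), is try-with-resources so the stream is closed even when writing the response fails:

{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;

public final class CloseStreamsSketch {
  private CloseStreamsSketch() {
  }

  // Copy a log file to the response stream; the FileInputStream is closed on every
  // exit path, which is exactly the guarantee this fix is after.
  static void streamLogFile(String path, OutputStream os) throws IOException {
    try (FileInputStream fis = new FileInputStream(path)) {
      byte[] buf = new byte[64 * 1024];
      int len;
      while ((len = fis.read(buf)) != -1) {
        os.write(buf, 0, len);
      }
      os.flush();
    } // fis.close() runs here regardless of success or failure
  }
}
{code}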




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319662#comment-15319662
 ] 

Hadoop QA commented on YARN-5170:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 47s 
{color} | {color:red} hadoop-yarn-server in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 47s {color} 
| {color:red} hadoop-yarn-server in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 59s 
{color} | {color:red} hadoop-yarn-server in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 59s {color} 
| {color:red} hadoop-yarn-server in the patch failed with JDK v1.7.0_101. 
{color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 22 new + 0 unchanged - 2 fixed = 22 total (was 2) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed 
with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-jdk1.7.0_101
 with JDK v1.7.0_101 generated 22 new + 0 unchanged 

[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319661#comment-15319661
 ] 

Hadoop QA commented on YARN-5176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808776/YARN-5176.003.patch |
| JIRA Issue | YARN-5176 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11894/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5199) Close LogReader in in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs

2016-06-07 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319652#comment-15319652
 ] 

Xuan Gong commented on YARN-5199:
-

Thanks for the review, Junping and Varun.
Committed to trunk/branch-2.

> Close LogReader in in AHSWebServices#getStreamingOutput and FileInputStream 
> in NMWebServices#getLogs
> 
>
> Key: YARN-5199
> URL: https://issues.apache.org/jira/browse/YARN-5199
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5199.1.patch, YARN-5199.2.patch, YARN-5199.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319646#comment-15319646
 ] 

Hadoop QA commented on YARN-5176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808776/YARN-5176.003.patch |
| JIRA Issue | YARN-5176 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11893/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5180) Allow ResourceRequest to specify an enforceExecutionType flag

2016-06-07 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh resolved YARN-5180.
---
Resolution: Fixed

Committed the addendum patch to trunk and branch-2.

> Allow ResourceRequest to specify an enforceExecutionType flag
> -
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.9.0
>
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch, 
> YARN-5180.003.patch, YARN-5180.004.patch, YARN-5180.005.patch, 
> YARN-5180.006.patch, YARN-5180.007.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine with receiving a Container of a different 
> ExecutionType than the one asked for.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5180) Allow ResourceRequest to specify an enforceExecutionType flag

2016-06-07 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh reopened YARN-5180:
---

Reopening to include an entry in findbugs-exclude.xml.

> Allow ResourceRequest to specify an enforceExecutionType flag
> -
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.9.0
>
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch, 
> YARN-5180.003.patch, YARN-5180.004.patch, YARN-5180.005.patch, 
> YARN-5180.006.patch, YARN-5180.007.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine with receiving a Container of a different 
> ExecutionType than the one asked for.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319630#comment-15319630
 ] 

Hadoop QA commented on YARN-5176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808776/YARN-5176.003.patch |
| JIRA Issue | YARN-5176 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11892/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5155) [YARN-3368] Show pending resource requests on application-attempt page

2016-06-07 Thread Chen Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Ge updated YARN-5155:
--
Attachment: YARN-5155.wip.1.patch

Attaching a WIP patch.

> [YARN-3368] Show pending resource requests on application-attempt page
> --
>
> Key: YARN-5155
> URL: https://issues.apache.org/jira/browse/YARN-5155
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Chen Ge
> Attachments: YARN-5155.wip.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5170:
---
Attachment: YARN-5170-YARN-2928.04.patch

Reduced triple creation of FlowRunRowKey on the write path.

Unit tests were run (locally); no new findbugs warnings were introduced (local runs).
Patch 04 is code-complete for this JIRA and ready for review.

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> agreed it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move those out to keep the 
> "Utils" class as small as possible, reserving it for truly general-purpose 
> utilities that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5176:
-
Attachment: YARN-5176.003.patch

Re-attaching the patch -- something seems to have been wrong with Jenkins last 
time.

> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5176:
-
Attachment: (was: YARN-5176.003.patch)

> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5137) Make DiskChecker pluggable

2016-06-07 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319603#comment-15319603
 ] 

Yufei Gu commented on YARN-5137:


After reading through YARN-4271, we do want to integrate with the existing disk 
checker in the NM: {{LocalDirsHandlerService}}. So I am cancelling the current 
patch and will upload a new one.

> Make DiskChecker pluggable
> --
>
> Key: YARN-5137
> URL: https://issues.apache.org/jira/browse/YARN-5137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
> Attachments: YARN-5137.001.patch
>
>
> It would be nice to have the option for a DiskChecker that has more 
> sophisticated checking capabilities.  In order to do this, we would first 
> need DiskChecker to be pluggable.
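One way pluggability could look (purely illustrative; the interface, the configuration key, and the factory below are assumptions for this sketch, not anything from the attached patch):

{code}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.DiskChecker;
import org.apache.hadoop.util.ReflectionUtils;

public class PluggableDiskCheckerSketch {

  /** Hypothetical plug-in point for NM disk health checks. */
  public interface NMDiskChecker {
    /** @throws IOException if the directory should be taken out of service. */
    void checkDir(File dir) throws IOException;
  }

  /** Default implementation delegating to the existing basic checks. */
  public static class DefaultNMDiskChecker implements NMDiskChecker {
    @Override
    public void checkDir(File dir) throws IOException {
      DiskChecker.checkDir(dir); // exists / is a directory / has permissions
    }
  }

  /** Loads the checker implementation named by a hypothetical config key. */
  public static NMDiskChecker create(Configuration conf) {
    Class<? extends NMDiskChecker> clazz = conf.getClass(
        "yarn.nodemanager.disk-checker.class",       // made-up key for this sketch
        DefaultNMDiskChecker.class, NMDiskChecker.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}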



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest

2016-06-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319590#comment-15319590
 ] 

Arun Suresh commented on YARN-5124:
---

[~curino], I did in fact try moving the remoteRequestTable to another class. 
Unfortunately, I don't think it would buy us much, considering that there are 
sections of the {{AMRMClientImpl}} that make decisions based on some of the 
intermediate sub-maps, which makes it a bit of a leaky abstraction.
In any case, I do think your suggestion is good, but it would probably exceed 
the scope of this JIRA, since properly ripping the map out into a separate 
entity would be quite a cleanup effort in AMRMClientImpl.

> Modify AMRMClient to set the ExecutionType in the ResourceRequest
> -
>
> Key: YARN-5124
> URL: https://issues.apache.org/jira/browse/YARN-5124
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5124.001.patch, YARN-5124.002.patch, 
> YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, 
> YARN-5124.006.patch, YARN-5124.008.patch, YARN-5124.009.patch, 
> YARN-5124.010.patch, YARN-5124.011.patch, 
> YARN-5124_YARN-5180_combined.007.patch, YARN-5124_YARN-5180_combined.008.patch
>
>
> Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} 
> in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} 
> that is sent to the RM.
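A hedged sketch of where the fix lands (the accessor on ContainerRequest and the setter on ResourceRequest are assumptions for illustration, not the exact committed API): when the client library converts a ContainerRequest into the ResourceRequest it sends to the RM, the AM's chosen ExecutionType has to be copied across as well.

{code}
import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public final class ExecutionTypePropagationSketch {
  private ExecutionTypePropagationSketch() {
  }

  // Illustrative only: propagate the execution type from the AM-side request
  // object onto the wire-level ResourceRequest.
  static ResourceRequest copyExecutionType(ContainerRequest cr, ResourceRequest rr) {
    rr.setExecutionTypeRequest(
        ExecutionTypeRequest.newInstance(cr.getExecutionType(), false)); // assumed accessor
    return rr;
  }
}
{code}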



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing

2016-06-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319586#comment-15319586
 ] 

Wangda Tan commented on YARN-4837:
--

Committed the patch to trunk; will commit to branch-2/branch-2.8 once we get a 
+1 from Jenkins.

> User facing aspects of 'AM blacklisting' feature need fixing
> 
>
> Key: YARN-4837
> URL: https://issues.apache.org/jira/browse/YARN-4837
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: YARN-4837-20160515.txt, YARN-4837-20160520.1.txt, 
> YARN-4837-20160520.txt, YARN-4837-20160527.txt, YARN-4837-20160604.txt, 
> YARN-4837-branch-2.005.patch
>
>
> Was reviewing the user-facing aspects that we are releasing as part of 2.8.0.
> Looking at the 'AM blacklisting feature', I see several things to be fixed 
> before we release it in 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing

2016-06-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319567#comment-15319567
 ] 

Hudson commented on YARN-4837:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9923 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9923/])
YARN-4837. User facing aspects of 'AM blacklisting' feature need fixing. 
(wangda: rev 620325e81696fca140195b74929ed9eda2d5eb16)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesAppsModification.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/blacklist/DisabledBlacklistManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestRMAppTransitions.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/AMBlackListingRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/blacklist/SimpleBlacklistManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/blacklist/BlacklistUpdates.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/blacklist/TestBlacklistManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/AMBlackListingRequestPBImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* 

[jira] [Updated] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing

2016-06-07 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4837:
-
Attachment: YARN-4837-branch-2.005.patch

Attached a patch for branch-2 to trigger the Jenkins build.

> User facing aspects of 'AM blacklisting' feature need fixing
> 
>
> Key: YARN-4837
> URL: https://issues.apache.org/jira/browse/YARN-4837
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: YARN-4837-20160515.txt, YARN-4837-20160520.1.txt, 
> YARN-4837-20160520.txt, YARN-4837-20160527.txt, YARN-4837-20160604.txt, 
> YARN-4837-branch-2.005.patch
>
>
> I was reviewing the user-facing aspects that we are releasing as part of 2.8.0.
> Looking at the 'AM blacklisting' feature, I see several things to be fixed 
> before we release it in 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4837) User facing aspects of 'AM blacklisting' feature need fixing

2016-06-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319541#comment-15319541
 ] 

Wangda Tan commented on YARN-4837:
--

Committing this patch now. The findbugs warning should be caused by YARN-5180; 
I have already commented on that JIRA.

> User facing aspects of 'AM blacklisting' feature need fixing
> 
>
> Key: YARN-4837
> URL: https://issues.apache.org/jira/browse/YARN-4837
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
> Attachments: YARN-4837-20160515.txt, YARN-4837-20160520.1.txt, 
> YARN-4837-20160520.txt, YARN-4837-20160527.txt, YARN-4837-20160604.txt
>
>
> I was reviewing the user-facing aspects that we are releasing as part of 2.8.0.
> Looking at the 'AM blacklisting' feature, I see several things to be fixed 
> before we release it in 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5180) Allow ResourceRequest to specify an enforceExecutionType flag

2016-06-07 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319538#comment-15319538
 ] 

Wangda Tan commented on YARN-5180:
--

Hi [~asuresh],

I think this committed patch introduced a new findbugs warning. Do you have a 
JIRA opened for the findbugs warning? If not, could you reopen this one and 
commit an addendum patch to fix the problem?

Thanks,

> Allow ResourceRequest to specify an enforceExecutionType flag
> -
>
> Key: YARN-5180
> URL: https://issues.apache.org/jira/browse/YARN-5180
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.9.0
>
> Attachments: YARN-5180.001.patch, YARN-5180.002.patch, 
> YARN-5180.003.patch, YARN-5180.004.patch, YARN-5180.005.patch, 
> YARN-5180.006.patch, YARN-5180.007.patch
>
>
> YARN-2882 introduced the concept of *ExecutionTypes*.
> YARN-4335 allowed AMs to specify the ExecutionType in the ResourceRequest.
> This JIRA proposes to add a boolean flag to the ResourceRequest to signal to 
> the Scheduler that the AM is fine receiving a Container with a different 
> Execution type than what is asked.
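
A minimal sketch of how an AM might use such a flag (the {{ExecutionTypeRequest}} 
wrapper and the method names below are assumptions based on this JIRA and 
YARN-4335, with imports from {{org.apache.hadoop.yarn.api.records}} elided):
{code}
// Ask for an OPPORTUNISTIC container, but signal that the scheduler may hand
// back a container of a different execution type (enforce flag = false).
ResourceRequest request = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY,
    Resource.newInstance(1024, 1), 1);
request.setExecutionTypeRequest(
    ExecutionTypeRequest.newInstance(ExecutionType.OPPORTUNISTIC, false));
{code}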



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319532#comment-15319532
 ] 

Hadoop QA commented on YARN-5170:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
46s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
27s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 55s 
{color} | {color:red} hadoop-yarn-server in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 55s {color} 
| {color:red} hadoop-yarn-server in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 54s 
{color} | {color:red} hadoop-yarn-server in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 54s {color} 
| {color:red} hadoop-yarn-server in the patch failed with JDK v1.7.0_101. 
{color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 19 new + 1 unchanged - 1 fixed = 20 total (was 2) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed 
with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-jdk1.7.0_101
 with JDK v1.7.0_101 generated 18 new + 0 

[jira] [Commented] (YARN-5052) Update timeline service v2 documentation to capture information about filters

2016-06-07 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319500#comment-15319500
 ] 

Joep Rottinghuis commented on YARN-5052:


Thanks for the updates [~varun_saxena], patch 02 looks good to me.

> Update timeline service v2 documentation to capture information about filters
> -
>
> Key: YARN-5052
> URL: https://issues.apache.org/jira/browse/YARN-5052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: Apache Hadoop 3.0.0-SNAPSHOT – The YARN Timeline Service 
> v.pdf, The YARN Timeline Service v2.02.pdf, YARN-5052-YARN-2928.01.patch, 
> YARN-5052-YARN-2928.02.patch
>
>
> Since YARN-4447 has gone in, we can update our documentation to capture 
> information about usage of filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319481#comment-15319481
 ] 

Joep Rottinghuis commented on YARN-5170:


I'm annotating the *RowKeyConverter classes with @VisibleForTesting. They 
should not be used directly by clients. Instead, the corresponding *RowKey 
should be used to get the row key, or the *RowKeyPrefix to get the prefix.
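
For illustration, the intended caller-side pattern would look roughly like the 
sketch below ({{ApplicationRowKey}} / {{ApplicationRowKeyPrefix}} and their 
constructor arguments are assumptions made for the example, not the exact API 
of this patch):
{code}
// Build the full row key through the public *RowKey class...
byte[] rowKey = new ApplicationRowKey(
    clusterId, userId, flowName, flowRunId, appId).getRowKey();

// ...and the scan prefix through the corresponding *RowKeyPrefix class,
// without touching the *RowKeyConverter (which is @VisibleForTesting only).
byte[] rowKeyPrefix = new ApplicationRowKeyPrefix(
    clusterId, userId, flowName, flowRunId).getRowKeyPrefix();
{code}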

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simply 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out, to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.
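
As a minimal sketch of the refactor described above (class names such as 
{{KeyConverter}} and {{AppIdKeyConverter}} are used as representative examples; 
the committed patch may differ in detail):
{code}
// Before: singleton converter reached through static access, e.g.
//   byte[] appIdBytes = AppIdKeyConverter.getInstance().encode(appId);

// After: the converter becomes a plain instance variable owned by the row key
// class, created once and reused instead of being re-fetched on every call.
public final class ApplicationRowKey {
  private final KeyConverter<String> appIdKeyConverter = new AppIdKeyConverter();
  private final String appId;

  public ApplicationRowKey(String appId) {
    this.appId = appId;
  }

  public byte[] getRowKey() {
    return appIdKeyConverter.encode(appId);
  }
}
{code}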



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319464#comment-15319464
 ] 

Vrushali C commented on YARN-5170:
--

Yes, I had resubmitted the build, but that failed too. I didn't notice the H8 
host earlier. Now I have submitted it again and it's running on H6: 
https://builds.apache.org/job/PreCommit-YARN-Build/11889/

thanks [~varun_saxena]! 

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simply 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out, to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5052) Update timeline service v2 documentation to capture information about filters

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319461#comment-15319461
 ] 

Hadoop QA commented on YARN-5052:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
57s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 8s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808758/YARN-5052-YARN-2928.02.patch
 |
| JIRA Issue | YARN-5052 |
| Optional Tests |  asflicense  mvnsite  xml  |
| uname | Linux 93e8331fcddb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2928 / 434e898 |
| modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11887/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Update timeline service v2 documentation to capture information about filters
> -
>
> Key: YARN-5052
> URL: https://issues.apache.org/jira/browse/YARN-5052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: Apache Hadoop 3.0.0-SNAPSHOT – The YARN Timeline Service 
> v.pdf, The YARN Timeline Service v2.02.pdf, YARN-5052-YARN-2928.01.patch, 
> YARN-5052-YARN-2928.02.patch
>
>
> Since YARN-4447 has gone in, we can update our documentation to capture 
> information about usage of filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319435#comment-15319435
 ] 

Varun Saxena edited comment on YARN-5210 at 6/7/16 9:08 PM:


Yes. In TestDistributedShell, we do not check for specific events. We just 
check for the entity type file (as we use the FS implementation). 


was (Author: varun_saxena):
Yes. In TestDistributedShell, we do not check for specific events. We just 
check for entity type file(with FS implementation). 

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is an NPE while publishing DS_CONTAINER_START_EVENT, which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from the response received upon querying a DS_CONTAINER entity, 
> createdtime is not present and DS_CONTAINER_START is not present either 
> (due to the NPE pointed out above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> 

[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319435#comment-15319435
 ] 

Varun Saxena commented on YARN-5210:


Yes. In TestDistributedShell, we do not check for specific events. We just 
check for the entity type file (with the FS implementation). 

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is an NPE while publishing DS_CONTAINER_START_EVENT, which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from the response received upon querying a DS_CONTAINER entity, 
> createdtime is not present and DS_CONTAINER_START is not present either 
> (due to the NPE pointed out above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}
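
A minimal sketch of the kind of change implied for the missing created time 
(whether the attached patch does exactly this is not shown here; 
{{containerStartTime}} is a hypothetical long captured when the container 
started, and exception handling is elided):
{code}
TimelineEntity entity = new TimelineEntity();
entity.setType("DS_CONTAINER");
entity.setId(containerId.toString());
// Set the created time explicitly so "createdtime" shows up in reader responses.
entity.setCreatedTime(containerStartTime);
timelineClient.putEntities(entity);
{code}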



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To 

[jira] [Updated] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-06-07 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-1942:
-
Attachment: YARN-1942-branch-2.0012.patch

Attached patch for branch-2 as well.

> Many of ConverterUtils methods need to have public interfaces
> -
>
> Key: YARN-1942
> URL: https://issues.apache.org/jira/browse/YARN-1942
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.4.0
>Reporter: Thomas Graves
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-1942-branch-2.0012.patch, YARN-1942.1.patch, 
> YARN-1942.10.patch, YARN-1942.11.patch, YARN-1942.12.patch, 
> YARN-1942.2.patch, YARN-1942.3.patch, YARN-1942.4.patch, YARN-1942.5.patch, 
> YARN-1942.6.patch, YARN-1942.8.patch, YARN-1942.9.patch
>
>
> ConverterUtils has a bunch of functions that are useful to application 
> masters. It should either be made public, or we should make some of the 
> utilities in it public, or we should provide other external APIs for 
> application masters to use. Note that distributedshell and MR are both using 
> these interfaces. 
> For instance, the main use case I see right now is for getting the application 
> attempt id within the appmaster:
> String containerIdStr =
>     System.getenv(Environment.CONTAINER_ID.name());
> ContainerId containerId = ConverterUtils.toContainerId(containerIdStr);
> ApplicationAttemptId applicationAttemptId =
>     containerId.getApplicationAttemptId();
> I don't see any other way for the application master to get this information. 
> If there is, please let me know.
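
For reference, the kind of public API this JIRA asks for would let an AM do 
roughly the following ({{ContainerId.fromString}} is the assumed public 
replacement for the private {{ConverterUtils.toContainerId}}, not the confirmed 
final API):
{code}
// Inside the ApplicationMaster: recover the attempt id from the environment.
String containerIdStr =
    System.getenv(ApplicationConstants.Environment.CONTAINER_ID.name());
ContainerId containerId = ContainerId.fromString(containerIdStr);
ApplicationAttemptId applicationAttemptId =
    containerId.getApplicationAttemptId();
{code}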



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5052) Update timeline service v2 documentation to capture information about filters

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319420#comment-15319420
 ] 

Varun Saxena commented on YARN-5052:


Uploaded a new patch.

By the way, in infofilters I have explained ene as well.
{quote}
"eq" means equals, "ne" means not equals and existence of key is not required 
for a match and "ene" means not equals but existence of key is
  required.
{quote}
Do let me know if it sounds a little cryptic.
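
As an illustration of the three operators (an example constructed for this 
discussion, not taken from the patch; the query syntax follows the infofilters 
format described in the timeline reader documentation):
{code}
Entity A has info {"appState": "RUNNING"}; entity B has no "appState" key.

infofilters=(appState eq RUNNING)   ->  matches A only
infofilters=(appState ne FAILED)    ->  matches A and B (key existence not required)
infofilters=(appState ene FAILED)   ->  matches A only (key must exist)
{code}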



> Update timeline service v2 documentation to capture information about filters
> -
>
> Key: YARN-5052
> URL: https://issues.apache.org/jira/browse/YARN-5052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: Apache Hadoop 3.0.0-SNAPSHOT – The YARN Timeline Service 
> v.pdf, The YARN Timeline Service v2.02.pdf, YARN-5052-YARN-2928.01.patch, 
> YARN-5052-YARN-2928.02.patch
>
>
> Since YARN-4447 has gone in, we can update our documentation to capture 
> information about usage of filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5052) Update timeline service v2 documentation to capture information about filters

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5052:
---
Attachment: YARN-5052-YARN-2928.02.patch

> Update timeline service v2 documentation to capture information about filters
> -
>
> Key: YARN-5052
> URL: https://issues.apache.org/jira/browse/YARN-5052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: Apache Hadoop 3.0.0-SNAPSHOT – The YARN Timeline Service 
> v.pdf, The YARN Timeline Service v2.02.pdf, YARN-5052-YARN-2928.01.patch, 
> YARN-5052-YARN-2928.02.patch
>
>
> Since YARN-4447 has gone in, we can update our documentation to capture 
> information about usage of filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5052) Update timeline service v2 documentation to capture information about filters

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5052:
---
Attachment: The YARN Timeline Service v2.02.pdf

> Update timeline service v2 documentation to capture information about filters
> -
>
> Key: YARN-5052
> URL: https://issues.apache.org/jira/browse/YARN-5052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: Apache Hadoop 3.0.0-SNAPSHOT – The YARN Timeline Service 
> v.pdf, The YARN Timeline Service v2.02.pdf, YARN-5052-YARN-2928.01.patch
>
>
> Since YARN-4447 has gone in, we can update our documentation to capture 
> information about usage of filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-06-07 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-1942:
-
Attachment: YARN-1942.12.patch

Attached ver.12, which fixes the javadoc warnings.

> Many of ConverterUtils methods need to have public interfaces
> -
>
> Key: YARN-1942
> URL: https://issues.apache.org/jira/browse/YARN-1942
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.4.0
>Reporter: Thomas Graves
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-1942.1.patch, YARN-1942.10.patch, 
> YARN-1942.11.patch, YARN-1942.12.patch, YARN-1942.2.patch, YARN-1942.3.patch, 
> YARN-1942.4.patch, YARN-1942.5.patch, YARN-1942.6.patch, YARN-1942.8.patch, 
> YARN-1942.9.patch
>
>
> ConverterUtils has a bunch of functions that are useful to application 
> masters. It should either be made public, or we should make some of the 
> utilities in it public, or we should provide other external APIs for 
> application masters to use. Note that distributedshell and MR are both using 
> these interfaces. 
> For instance, the main use case I see right now is for getting the application 
> attempt id within the appmaster:
> String containerIdStr =
>     System.getenv(Environment.CONTAINER_ID.name());
> ContainerId containerId = ConverterUtils.toContainerId(containerIdStr);
> ApplicationAttemptId applicationAttemptId =
>     containerId.getApplicationAttemptId();
> I don't see any other way for the application master to get this information. 
> If there is, please let me know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319370#comment-15319370
 ] 

Hadoop QA commented on YARN-5176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 6s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808751/YARN-5176.003.patch |
| JIRA Issue | YARN-5176 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11886/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319345#comment-15319345
 ] 

Konstantinos Karanasos edited comment on YARN-5176 at 6/7/16 8:35 PM:
--

PS: Created YARN-5212 to address this.
bq. I propose we have a parametrized testcase similar to what is done for the 
Schedulers to test the common scenarios for both ContainerManagers.


was (Author: kkaranasos):
PS: Creating YARN-5212 to address this.
bq. I propose we have a parametrized testcase similar to what is done for the 
Schedulers to test the common scenarios for both ContainerManagers.

> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319345#comment-15319345
 ] 

Konstantinos Karanasos commented on YARN-5176:
--

PS: Creating YARN-5212 to address this.
bq. I propose we have a parametrized testcase similar to what is done for the 
Schedulers to test the common scenarios for both ContainerManagers.

> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5212) Run existing ContainerManager tests using QueuingContainerManagerImpl

2016-06-07 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-5212:


 Summary: Run existing ContainerManager tests using 
QueuingContainerManagerImpl
 Key: YARN-5212
 URL: https://issues.apache.org/jira/browse/YARN-5212
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Konstantinos Karanasos
Assignee: Konstantinos Karanasos


The existing {{TestContainerManager}} test class will be modified to be able to 
use both the {{ContainerManagerImpl}} and the {{QueuingContainerManagerImpl}} 
during the tests. This way we will make sure that no regression was introduced 
in the existing cases by the {{QueuingContainerManagerImpl}}.
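
For illustration, the parameterized-test pattern being proposed could look like 
the self-contained JUnit 4 sketch below; in the actual refactor the boolean 
parameter would select between {{ContainerManagerImpl}} and 
{{QueuingContainerManagerImpl}} rather than being a plain flag:
{code}
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertTrue;

@RunWith(Parameterized.class)
public class ContainerManagerParameterizedExample {

  @Parameters(name = "queuingEnabled={0}")
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][] {{false}, {true}});
  }

  private final boolean queuingEnabled;

  public ContainerManagerParameterizedExample(boolean queuingEnabled) {
    this.queuingEnabled = queuingEnabled;
  }

  @Test
  public void runsOncePerImplementation() {
    // Each existing test method would run once per container manager flavor.
    assertTrue(queuingEnabled || !queuingEnabled);
  }
}
{code}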



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319328#comment-15319328
 ] 

Konstantinos Karanasos commented on YARN-5176:
--

Thanks for the review, [~asuresh]!

I agree with creating a parameterized test for testing both 
{{ContainerManagerImpl}} and {{QueuingContainerManagerImpl}} against the 
original tests written for the {{ContainerManagerImpl}}.

I increased the timeout from 20 to 30 sec (locally the test case runs properly). 
I think it is a timing issue, because the ContainerImpl goes from the 
CONTAINER_CLEANEDUP_AFTER_KILL state to the DONE state, so if we wait long 
enough, we should always reach the DONE state. I will upload the new patch now 
to kick off Jenkins.
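
As an aside, the wait being discussed is essentially a bounded poll for the 
DONE state, along these lines (an illustrative sketch; the container handle and 
state accessor names are assumptions, and InterruptedException handling is 
elided):
{code}
// Poll the NM container state until DONE or until the 30-second deadline expires.
long deadline = System.currentTimeMillis() + 30_000L;
while (container.getContainerState() != ContainerState.DONE
    && System.currentTimeMillis() < deadline) {
  Thread.sleep(100);
}
{code}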

> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5176:
-
Attachment: YARN-5176.003.patch

Fixing the timing issue for one of the test cases.

> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch, 
> YARN-5176.003.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319311#comment-15319311
 ] 

Varun Saxena commented on YARN-5170:


This is a long-standing issue; the YARN-2928 build always breaks on Jenkins host H8.
You can resubmit the build.

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simply 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out, to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned YARN-5170:
--

Assignee: Varun Saxena  (was: Joep Rottinghuis)

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simply 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out, to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319277#comment-15319277
 ] 

Li Lu commented on YARN-5210:
-

LGTM. Did we miss this in TestDistributedShell all the time? 

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is an NPE while publishing DS_CONTAINER_START_EVENT, which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from the response received upon querying a DS_CONTAINER entity, 
> createdtime is not present and DS_CONTAINER_START is not present either 
> (due to the NPE pointed out above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, 

[jira] [Commented] (YARN-5177) Make Node-Manager Download-Resource Component extensible.

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319267#comment-15319267
 ] 

Hadoop QA commented on YARN-5177:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 7s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk 
has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 3s 
{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 3s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 50s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 5 
new + 404 unchanged - 1 fixed = 409 total (was 405) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 21s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 13s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 2s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808741/YARN-5177-V0.patch |
| JIRA Issue | YARN-5177 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 34043c33df83 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319218#comment-15319218
 ] 

Vrushali C commented on YARN-5170:
--

Perhaps your patch is not based on the latest head, [~jrottinghuis].

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simple 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> they are not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.
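
To make the intended shape of the change concrete, a minimal sketch follows; {{AppIdKeyConverter}}, {{ExampleRowKey}} and the {{encode()}} signature are simplified stand-ins for the real ATSv2 classes, not the actual patch. The converter becomes an ordinary class held as an instance field by the row-key object, and the encoded row key is cached rather than rebuilt on every call.

{code}
// Illustrative sketch only; the class and method names are simplified
// stand-ins, not the actual ATSv2 classes touched by the patch.
interface KeyConverter<T> {
  byte[] encode(T key);
}

// The converter is a plain class instead of a singleton reached through a
// static getInstance() accessor.
final class AppIdKeyConverter implements KeyConverter<String> {
  @Override
  public byte[] encode(String appId) {
    return appId.getBytes(java.nio.charset.StandardCharsets.UTF_8);
  }
}

// Each row-key object owns its converter as an instance field and caches the
// encoded byte[] so the same row key is not re-created every time it is used.
final class ExampleRowKey {
  private final String appId;
  private final KeyConverter<String> converter = new AppIdKeyConverter();
  private byte[] encodedRowKey;

  ExampleRowKey(String appId) {
    this.appId = appId;
  }

  byte[] getRowKey() {
    if (encodedRowKey == null) {
      encodedRowKey = converter.encode(appId);
    }
    return encodedRowKey;
  }
}
{code}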



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319214#comment-15319214
 ] 

Vrushali C commented on YARN-5170:
--

I have restarted the build 
https://builds.apache.org/job/PreCommit-YARN-Build/11885/


> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and make them simple 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> they are not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-07 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319213#comment-15319213
 ] 

Vrushali C commented on YARN-5170:
--

Here is the console output of the jenkins build 

https://builds.apache.org/job/PreCommit-YARN-Build/11869/console

{code}
Console Output

[EnvInject] - Mask passwords passed as build parameters.
Started by remote host 127.0.0.1
[EnvInject] - Loading node environment variables.
Building remotely on H8 (Mapreduce Falcon Hadoop Pig Zookeeper Tez Hdfs 
yahoo-not-h2) in workspace 
/home/jenkins/jenkins-slave/workspace/PreCommit-YARN-Build
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
Fetching upstream changes from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision bddea5fe5fe72eee8e2ecfcec616bd8ceb4d72e7 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f bddea5fe5fe72eee8e2ecfcec616bd8ceb4d72e7
 > git rev-list bddea5fe5fe72eee8e2ecfcec616bd8ceb4d72e7 # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
No emails were triggered.
[PreCommit-YARN-Build] $ /bin/bash /tmp/hudson2198946111292113128.sh
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
100  316k  100  316k0 0   788k  0 --:--:-- --:--:-- --:--:--  790k
Modes:  MultiJDK  Jenkins  Robot  Docker  ResetRepo  UnitTests 
Processing: YARN-5170
YARN-5170 patch is being downloaded at Tue Jun  7 08:28:40 UTC 2016 from
  
https://issues.apache.org/jira/secure/attachment/12808608/YARN-5170-YARN-2928.03.patch
 -> Downloaded




Confirming git environment




HEAD is now at bddea5f YARN-5118. Tests fails with localizer port bind 
exception. Contributed by Brahma Reddy Battula.
Previous HEAD position was bddea5f... YARN-5118. Tests fails with localizer 
port bind exception. Contributed by Brahma Reddy Battula.
Switched to branch 'trunk'
Your branch is behind 'origin/trunk' by 3 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
First, rewinding head to replay your work on top of it...
Fast-forwarded trunk to bddea5fe5fe72eee8e2ecfcec616bd8ceb4d72e7.
Switched to branch 'YARN-2928'
Your branch and 'origin/YARN-2928' have diverged,
and have 85 and 793 different commits each, respectively.
  (use "git pull" to merge the remote branch into yours)
First, rewinding head to replay your work on top of it...
Applying: YARN-3063. Bootstrapping TimelineServer next generation module. 
Contributed by Zhijie Shen.
Using index info to reconstruct a base tree...
M   hadoop-project/pom.xml
A   hadoop-yarn-project/CHANGES.txt
M   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
:47: trailing whitespace.
Trunk - Unreleased 
warning: 1 line adds whitespace errors.
Falling back to patching base and 3-way merge...
Auto-merging hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
CONFLICT (content): Merge conflict in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
Auto-merging 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
CONFLICT (add/add): Merge conflict in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/pom.xml
CONFLICT (modify/delete): hadoop-yarn-project/CHANGES.txt deleted in HEAD and 
modified in YARN-3063. Bootstrapping TimelineServer next generation module. 
Contributed by Zhijie Shen.. Version YARN-3063. Bootstrapping TimelineServer 
next generation module. Contributed by Zhijie Shen. of 
hadoop-yarn-project/CHANGES.txt left in tree.
Auto-merging hadoop-project/pom.xml
CONFLICT (content): Merge conflict in hadoop-project/pom.xml
Failed to merge in the changes.
Patch failed at 0001 YARN-3063. Bootstrapping TimelineServer next generation 
module. Contributed by Zhijie Shen.
The copy of the patch that failed is found in:
   

[jira] [Commented] (YARN-5052) Update timeline service v2 documentation to capture information about filters

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319204#comment-15319204
 ] 

Varun Saxena commented on YARN-5052:


Thanks [~jrottinghuis] for the comments. Will fix and upload a patch before 
signing off for the day.

> Update timeline service v2 documentation to capture information about filters
> -
>
> Key: YARN-5052
> URL: https://issues.apache.org/jira/browse/YARN-5052
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: Apache Hadoop 3.0.0-SNAPSHOT – The YARN Timeline Service 
> v.pdf, YARN-5052-YARN-2928.01.patch
>
>
> Since YARN-4447 has gone in, we can update our documentation to capture 
> information about usage of filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5177) Make Node-Manager Download-Resource Component extensible.

2016-06-07 Thread Emeka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Emeka updated YARN-5177:

Attachment: YARN-5177-V0.patch

> Make Node-Manager Download-Resource Component extensible.
> -
>
> Key: YARN-5177
> URL: https://issues.apache.org/jira/browse/YARN-5177
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.7.0
>Reporter: Emeka
>Priority: Minor
> Attachments: YARN-5177-V0.patch
>
>
> Problem:
> - Downloading files to a local machine/node is called "resource-localization".
> - There are two components that perform resource-localization (PublicLocalizer 
> and ContainerLocalizer).
> - Both components utilize FSDownload to perform their downloads.
> - We need a custom implementation of FSDownload.
> Solution:
> - With this change, we make FSDownload extensible by wrapping it in a new 
> ResourceDownloader interface.
> - We also update PublicLocalizer and ContainerLocalizer to load a 
> ResourceDownloader rather than FSDownload directly.
> - NOTE: We use reflection to load the right implementation at runtime.
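
A rough sketch of the extension point described above follows; the {{ResourceDownloader}} method, the configuration key and the factory are assumptions made for illustration and are not taken from YARN-5177-V0.patch. The idea is simply that the localizers ask a factory for a {{ResourceDownloader}}, and the factory picks the implementation class reflectively from configuration.

{code}
// Hypothetical sketch; the interface, its method, the configuration key and
// the factory are illustrative assumptions, not the actual patch contents.
import org.apache.hadoop.conf.Configuration;

interface ResourceDownloader {
  // Assumed method: download the given resource to the local node.
  void download(java.net.URI resource) throws Exception;
}

final class ResourceDownloaderFactory {
  // Assumed configuration key; the real patch may name it differently.
  static final String DOWNLOADER_CLASS_KEY =
      "yarn.nodemanager.resource-downloader.class";

  // Load the configured implementation reflectively, falling back to a
  // placeholder default (in the patch that role would be an FSDownload wrapper).
  static ResourceDownloader create(Configuration conf) throws Exception {
    String className =
        conf.get(DOWNLOADER_CLASS_KEY, NoOpDownloader.class.getName());
    return Class.forName(className)
        .asSubclass(ResourceDownloader.class)
        .getDeclaredConstructor()
        .newInstance();
  }

  // Placeholder default implementation, present only to keep the sketch runnable.
  public static final class NoOpDownloader implements ResourceDownloader {
    @Override
    public void download(java.net.URI resource) {
      // no-op placeholder
    }
  }
}
{code}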



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319189#comment-15319189
 ] 

Varun Saxena commented on YARN-5210:


QA report comes out clean. 
Kindly review.

Haven't added tests but have verified the fix in my setup.

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is a NPE while publishing DS_CONTAINER_START_EVENT which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from response received upon querying a DS_CONTAINER entity we 
> can see that createdtime is not present and DS_CONTAINER_START is not present 
> either(due to NPE pointed above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319183#comment-15319183
 ] 

Varun Saxena commented on YARN-2962:


Thanks [~asuresh]. 
Seems the patch is not applying. Will upload a patch after rebasing to trunk and 
fixing Daniel's comments by tomorrow (as it's late night here). Maybe you can 
have a look then.
It would be good if we can get this into 3.0.0-alpha, because we were thinking 
of including this fix in our private code.

I think the one part which mainly needed discussion was how we delete the 
parent application node (the application is now split into 2 nodes) if it 
contains no children. This check is currently done when the application is 
being removed. We do not keep the whole operation under a single fencing, as we 
have to check the number of children after the deletion. If 2 RMs can ever 
become active at the same time, this can potentially lead to a race. Maybe we 
can just swallow NotEmptyException during deletion of the parent.
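
A minimal sketch of the swallow-{{NotEmptyException}} idea, assuming the raw ZooKeeper client API for clarity (the real ZKRMStateStore goes through its own helpers, so this only shows the intended control flow):

{code}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

final class ParentNodeCleanup {
  // Try to remove the (split) parent application znode once the app is removed.
  // If children still exist, e.g. because another writer raced us, leave the
  // parent in place instead of failing the whole remove operation.
  static void deleteParentIfEmpty(ZooKeeper zk, String parentPath)
      throws KeeperException, InterruptedException {
    try {
      zk.delete(parentPath, -1); // -1 matches any version
    } catch (KeeperException.NotEmptyException e) {
      // Parent is not empty any more; swallow and move on.
    }
  }
}
{code}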

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.01.patch, YARN-2962.04.patch, 
> YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes even though 
> they individually they were all small.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319176#comment-15319176
 ] 

Hadoop QA commented on YARN-5210:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
22s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 0 new + 56 unchanged - 2 fixed = 56 total (was 58) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 37s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 38s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808733/YARN-5210-YARN-2928.01.patch
 |
| JIRA Issue | YARN-5210 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  

[jira] [Commented] (YARN-5176) More test cases for queuing of containers at the NM

2016-06-07 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319139#comment-15319139
 ] 

Arun Suresh commented on YARN-5176:
---

Thanks for the patch [~kkaranasos].. and the thorough testing..

The original intent of {{TestQueuingContainerManager}} being a subclass of 
{{TestContainerManager}} was, I believe, to run all of its testcases as well. 
But I understand the rationale for changing that hierarchy, given the different 
configurations required for the specific testcases addressed in this patch.

I propose we have a parametrized testcase, similar to what is done for the 
Schedulers, to test the common scenarios for both ContainerManagers. Can we 
have a JIRA to track that?

A minor nit otherwise is that the current test failure seems not to be a timing 
issue. If that is a valid state, maybe you should assert that the final state 
can be either *DONE* or *CONTAINER_CLEANEDUP_AFTER_KILL*.

+1 pending the above..
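
A rough sketch of such a parametrized testcase, assuming JUnit 4's {{Parameterized}} runner; the class name and the boolean flag are placeholders rather than the actual {{TestContainerManager}} hierarchy:

{code}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestCommonContainerManagerScenarios {

  @Parameters(name = "queuingEnabled={0}")
  public static Collection<Object[]> flavours() {
    // Run every shared scenario once per ContainerManager flavour.
    return Arrays.asList(new Object[][] {{false}, {true}});
  }

  private final boolean queuingEnabled;

  public TestCommonContainerManagerScenarios(boolean queuingEnabled) {
    this.queuingEnabled = queuingEnabled;
  }

  @Test
  public void testCommonScenario() {
    // Placeholder: set up the ContainerManager matching queuingEnabled and
    // exercise the scenarios common to both implementations here.
  }
}
{code}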

> More test cases for queuing of containers at the NM
> ---
>
> Key: YARN-5176
> URL: https://issues.apache.org/jira/browse/YARN-5176
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5176.001.patch, YARN-5176.002.patch
>
>
> Extending {{TestQueuingContainerManagerImpl}} to include more test cases for 
> the queuing of containers at the NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5211) Supporting "priorities" in the ReservationSystem

2016-06-07 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-5211:
--

 Summary: Supporting "priorities" in the ReservationSystem
 Key: YARN-5211
 URL: https://issues.apache.org/jira/browse/YARN-5211
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Carlo Curino
Assignee: Carlo Curino


The ReservationSystem currently has an implicit FIFO priority. This JIRA tracks 
the effort to generalize this to arbitrary priorities. This is non-trivial, as 
the greedy nature of our ReservationAgents might need to be revisited if not 
enough space is found for late-arriving but higher-priority reservations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5208) Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens TestAMRMTokens tests with hadoop.security.token.service.use_ip enabled

2016-06-07 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319123#comment-15319123
 ] 

Sunil G commented on YARN-5208:
---

The default IP resolver was used earlier and it might be enough for the client 
tests. With the first patch, many of the regularly failing test classes are 
passing. But as you mentioned, there will be an impact on HA or some other test 
cases where we use the hostname through conf. I think such cases can be fixed.

> Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens 
> TestAMRMTokens tests with hadoop.security.token.service.use_ip enabled
> 
>
> Key: YARN-5208
> URL: https://issues.apache.org/jira/browse/YARN-5208
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: test
> Attachments: 0001-YARN-5208.patch, 0002-YARN-5208.patch
>
>
> All YARN test cases are running with *hadoop.security.token.service.use_ip* 
> disabled. As a result, a few test classes ({{TestAMRMClient TestNMClient 
> TestYarnClient TestClientRMTokens TestAMRMTokens}}) are consistently failing 
> because they are unable to resolve the hostname (see HADOOP-12687, YARN-4306, 
> YARN-4318).
> I would suggest running the tests with *hadoop.security.token.service.use_ip* 
> enabled by default. For the HA test cases which require it to be disabled, 
> change the test cases as required by setting 
> {code}
> conf.setBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP, false);
> SecurityUtil.setConfiguration(conf);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319104#comment-15319104
 ] 

Varun Saxena commented on YARN-5210:


Tested and verified the fixes made in the patch above.
As can be seen below for DS_APP_ATTEMPT, created time is now returned.
{code}
  {
"metrics": [ ],
"events": [ ],
"type": "DS_APP_ATTEMPT",
"id": "appattempt_1465311876353_0008_01",
"createdtime": 1465322936522,
"isrelatedto": { },
"relatesto": { },
"info": {
  "UID": 
"yarn-cluster!application_1465311876353_0008!DS_APP_ATTEMPT!appattempt_1465311876353_0008_01"
},
"configs": { }
  }
{code}

Moreover, createdtime is also returned for DS_CONTAINER entity and 
DS_CONTAINER_START event is also present.
{code}
  {
"metrics": [ ],
"events": [
  {
"id": "DS_CONTAINER_END",
"timestamp": 1465322940710,
"info": {
  "Exit Status": 0,
  "State": "COMPLETE"
}
  },
  {
"id": "DS_CONTAINER_START",
"timestamp": 1465322939739,
"info": {
  "Node": "192.168.0.102:64318",
  "Resources": ""
}
  }
],
"type": "DS_CONTAINER",
"id": "container_e77_1465311876353_0008_01_03",
"createdtime": 1465322939739,
"isrelatedto": { },
"relatesto": { },
"info": {
  "UID": 
"yarn-cluster!application_1465311876353_0008!DS_CONTAINER!container_e77_1465311876353_0008_01_03"
},
"configs": { }
  }
{code}

Ran TestDistributedShell locally and it passed.

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is a NPE while publishing DS_CONTAINER_START_EVENT which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {

[jira] [Commented] (YARN-5208) Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens TestAMRMTokens tests with hadoop.security.token.service.use_ip enabled

2016-06-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319101#comment-15319101
 ] 

Hadoop QA commented on YARN-5208:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 29 
new + 184 unchanged - 2 fixed = 213 total (was 186) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 26s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 2s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
|   | hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | hadoop.yarn.client.TestGetGroups |
|   | hadoop.yarn.client.cli.TestLogsCLI |
| Timed out junit tests | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808717/0002-YARN-5208.patch |
| JIRA Issue | YARN-5208 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1586f991c8e3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c14c1b2 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Description: 
Found a couple of issues while testing ATSv2.
* There is a NPE while publishing DS_CONTAINER_START_EVENT which in turn means 
that this event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

* Created time is not reported from distributed shell for both DS_CONTAINER and 
DS_APP_ATTEMPT entities. 
As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
createdtime in response.
{code}
  [
{
  "metrics": [ ],
  "events": [ ],
  "type": "DS_APP_ATTEMPT",
  "id": "appattempt_1465246237936_0003_01",
  "isrelatedto": { },
  "relatesto": { },
  "info": {
"UID": 
"yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
  },
  "configs": { }
}
  ]
{code}
As can be seen from response received upon querying a DS_CONTAINER entity we 
can see that createdtime is not present and DS_CONTAINER_START is not present 
either(due to NPE pointed above).
{code}
  {
"metrics": [ ],
"events": [
  {
"id": "DS_CONTAINER_END",
"timestamp": 1465314587480,
"info": {
  "Exit Status": 0,
  "State": "COMPLETE"
}
  }
],
"type": "DS_CONTAINER",
"id": "container_e77_1465311876353_0003_01_02",
"isrelatedto": { },
"relatesto": { },
"info": {
  "UID": 
"yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
},
"configs": { }
  }
{code}

  was:
Found a couple of issues while testing ATSv2.
* There is a NPE while publishing DS_CONTAINER_START_EVENT which means that 
this event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 

[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319096#comment-15319096
 ] 

Varun Saxena commented on YARN-5210:


I have submitted a patch to fix the issues.
Marked it for 1st milestone too. Should be easy to review and get in.

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is a NPE while publishing DS_CONTAINER_START_EVENT which means that 
> this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from response received upon querying a DS_CONTAINER entity we 
> can see that createdtime is not present and DS_CONTAINER_START is not present 
> either(due to NPE pointed above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Attachment: YARN-5210-YARN-2928.01.patch

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is a NPE while publishing DS_CONTAINER_START_EVENT which means that 
> this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from response received upon querying a DS_CONTAINER entity we 
> can see that createdtime is not present and DS_CONTAINER_START is not present 
> either(due to NPE pointed above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Description: 
Found a couple of issues while testing ATSv2.
* There is a NPE while publishing DS_CONTAINER_START_EVENT which means that 
this event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

* Created time is not reported from distributed shell for both DS_CONTAINER and 
DS_APP_ATTEMPT entities. 
As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
createdtime in response.
{code}
  [
{
  "metrics": [ ],
  "events": [ ],
  "type": "DS_APP_ATTEMPT",
  "id": "appattempt_1465246237936_0003_01",
  "isrelatedto": { },
  "relatesto": { },
  "info": {
"UID": 
"yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
  },
  "configs": { }
}
  ]
{code}
As can be seen from response received upon querying a DS_CONTAINER entity we 
can see that createdtime is not present and DS_CONTAINER_START is not present 
either(due to NPE pointed above).
{code}
  {
"metrics": [ ],
"events": [
  {
"id": "DS_CONTAINER_END",
"timestamp": 1465314587480,
"info": {
  "Exit Status": 0,
  "State": "COMPLETE"
}
  }
],
"type": "DS_CONTAINER",
"id": "container_e77_1465311876353_0003_01_02",
"isrelatedto": { },
"relatesto": { },
"info": {
  "UID": 
"yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
},
"configs": { }
  }
{code}

  was:
Found a couple of issues while testing ATSv2.
* There is a NPE while publishing DS_CONTAINER_START_EVENT which means that 
this event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 

[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Description: 
Found a couple of issues while testing ATSv2.
* There is a NPE while publishing DS_CONTAINER_START_EVENT which means that 
this event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

* Created time is not reported from distributed shell for both DS_CONTAINER and 
DS_APP_ATTEMPT entities. 
As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
createdtime in the response.
{code}
  [
{
  "metrics": [ ],
  "events": [ ],
  "type": "DS_APP_ATTEMPT",
  "id": "appattempt_1465246237936_0003_01",
  "isrelatedto": { },
  "relatesto": { },
  "info": {
"UID": 
"yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
  },
  "configs": { }
}
  ]
{code}

As can be seen from the response received upon querying a DS_CONTAINER entity, 
createdtime is not present, and DS_CONTAINER_START is not present either (due 
to the NPE pointed out above).
{code}
  {
"metrics": [ ],
"events": [
  {
"id": "DS_CONTAINER_END",
"timestamp": 1465314587480,
"info": {
  "Exit Status": 0,
  "State": "COMPLETE"
}
  }
],
"type": "DS_CONTAINER",
"id": "container_e77_1465311876353_0003_01_02",
"isrelatedto": { },
"relatesto": { },
"info": {
  "UID": 
"yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
},
"configs": { }
  }
{code}
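
Purely as an illustrative sketch (the helper name, parameters, and caller are 
assumptions, not the actual ApplicationMaster code), and assuming the ATSv2 
{{TimelineEntity}} API with {{setCreatedTime}}, the missing field would be 
populated before publishing, e.g.:
{code}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;

// Hypothetical helper: build a DS_APP_ATTEMPT entity with its created time
// set, so that "createdtime" shows up in reader responses like the one above.
public final class EntityBuilderSketch {
  static TimelineEntity buildAppAttemptEntity(String appAttemptId, long startMillis) {
    TimelineEntity entity = new TimelineEntity();
    entity.setType("DS_APP_ATTEMPT");
    entity.setId(appAttemptId);
    entity.setCreatedTime(startMillis); // currently not set, hence no createdtime field
    return entity;
  }
}
{code}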

  was:
Found a couple of issues while testing ATSv2.
* There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 

[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Description: 
Found a couple of issues while testing ATSv2.
* There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

* Created time is not reported from distributed shell for both DS_CONTAINER and 
DS_APP_ATTEMPT entities.
{code}
  [
{
  "metrics": [ ],
  "events": [ ],
  "type": "DS_APP_ATTEMPT",
  "id": "appattempt_1465246237936_0003_01",
  "isrelatedto": { },
  "relatesto": { },
  "info": {
"UID": 
"yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
  },
  "configs": { }
}
  ]
{code}

  was:
Found a couple of issues while testing ATSv2.
* There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 

[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Description: 
Found a couple of issues while testing ATSv2.
* There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

* Created time is not reported from distributed shell for both DS_CONTAINER and 
DS_APP_ATTEMPT entities. 
As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
createdtime in the response.
{code}
  [
{
  "metrics": [ ],
  "events": [ ],
  "type": "DS_APP_ATTEMPT",
  "id": "appattempt_1465246237936_0003_01",
  "isrelatedto": { },
  "relatesto": { },
  "info": {
"UID": 
"yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
  },
  "configs": { }
}
  ]
{code}

  was:
Found a couple of issues while testing ATSv2.
* There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 

[jira] [Commented] (YARN-4525) Fix bug in RLESparseResourceAllocation.getRangeOverlapping(...)

2016-06-07 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319064#comment-15319064
 ] 

Carlo Curino commented on YARN-4525:


Thanks to [~imenache] for spotting the issue and proposing the initial patch, 
and to [~asuresh] for reviewing and committing. 

> Fix bug in RLESparseResourceAllocation.getRangeOverlapping(...)
> ---
>
> Key: YARN-4525
> URL: https://issues.apache.org/jira/browse/YARN-4525
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ishai Menache
>Assignee: Ishai Menache
> Fix For: 2.8.0
>
> Attachments: YARN-4525.1.patch, YARN-4525.2.patch, YARN-4525.patch
>
>
> One of our tests detected a corner case in getRangeOverlapping: when the 
> RLESparseResourceAllocation object is the result of a merge operation, the 
> underlying map is a "view" within some range. If 'end' is outside that 
> range, headMap(..) throws an uncaught exception.
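
To make the failure mode concrete, the snippet below reproduces the JDK view 
semantics with a plain {{TreeMap}}; it is only an illustration, not the 
{{RLESparseResourceAllocation}} code itself:
{code}
import java.util.NavigableMap;
import java.util.TreeMap;

public class HeadMapViewDemo {
  public static void main(String[] args) {
    NavigableMap<Long, Integer> full = new TreeMap<>();
    full.put(0L, 1);
    full.put(10L, 2);
    full.put(20L, 3);

    // A "view" restricted to [0, 15), similar to what a merge can produce.
    NavigableMap<Long, Integer> view = full.subMap(0L, true, 15L, false);

    // Asking the restricted view for headMap(25) throws
    // IllegalArgumentException ("toKey out of range") instead of returning a
    // truncated head map -- the uncaught exception described above.
    view.headMap(25L, false);
  }
}
{code}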






[jira] [Commented] (YARN-5185) StageAllocaterGreedyRLE: Fix NPE in corner case

2016-06-07 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319062#comment-15319062
 ] 

Carlo Curino commented on YARN-5185:


Thanks for reviewing and committing, [~asuresh].

> StageAllocaterGreedyRLE: Fix NPE in corner case 
> 
>
> Key: YARN-5185
> URL: https://issues.apache.org/jira/browse/YARN-5185
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Fix For: 2.8.0
>
> Attachments: YARN-5185.1.patch
>
>
> If the plan has only one interval and the reservation exactly overlaps it, 
> partialMap.higherKey() returns null, which we should guard against.
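
A minimal illustration of that corner case with a plain {{TreeMap}} (not the 
actual planner code):
{code}
import java.util.TreeMap;

public class HigherKeyNullDemo {
  public static void main(String[] args) {
    // A plan with a single interval starting at t=100.
    TreeMap<Long, Integer> partialMap = new TreeMap<>();
    partialMap.put(100L, 42);

    // If the reservation starts exactly at that only key, there is no strictly
    // greater key, so higherKey(...) returns null and must be guarded against.
    Long next = partialMap.higherKey(100L);
    System.out.println(next); // prints: null
  }
}
{code}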






[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Labels: yarn-2928-1st-milestone  (was: )

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
>
> Found a couple of issues while testing ATSv2.
> * There is an NPE while publishing DS_CONTAINER_START_EVENT, which means 
> this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities.






[jira] [Commented] (YARN-5199) Close LogReader in AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs

2016-06-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319056#comment-15319056
 ] 

Junping Du commented on YARN-5199:
--

The latest patch (003) LGTM. +1. Will commit it shortly.
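
For readers following along, the general pattern the summary points at, namely 
closing the stream that backs the response, can be sketched with 
try-with-resources; this is only an illustration under the assumption of the 
standard JAX-RS {{StreamingOutput}} interface, not the actual patch:
{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.ws.rs.core.StreamingOutput;

// Illustrative only: close the stream backing a StreamingOutput even when
// writing to the response fails. Not the AHSWebServices/NMWebServices code.
public class LogStreamingSketch {
  public static StreamingOutput streamLog(final String logFile) {
    return new StreamingOutput() {
      @Override
      public void write(OutputStream os) throws IOException {
        // try-with-resources guarantees the FileInputStream is closed.
        try (FileInputStream fis = new FileInputStream(logFile)) {
          byte[] buf = new byte[65536];
          int len;
          while ((len = fis.read(buf)) != -1) {
            os.write(buf, 0, len);
          }
          os.flush();
        }
      }
    };
  }
}
{code}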

> Close LogReader in AHSWebServices#getStreamingOutput and FileInputStream 
> in NMWebServices#getLogs
> 
>
> Key: YARN-5199
> URL: https://issues.apache.org/jira/browse/YARN-5199
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5199.1.patch, YARN-5199.2.patch, YARN-5199.3.patch
>
>







[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Description: 
Found a couple of issues while testing ATSv2.
* There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

* Created time is not reported from distributed shell for both DS_CONTAINER and 
DS_APP_ATTEMPT entities.

  was:
Found a couple of issues while testing ATSv2.
# There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 

[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Description: 
Found a couple of issues while testing ATSv2.
# There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

# Created time is not reported from distributed shell for both DS_CONTAINER and 
DS_APP_ATTEMPT entities.

  was:
Found a couple of issues while testing ATSv2.
# There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 

[jira] [Updated] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5210:
---
Description: 
Found a couple of issues while testing ATSv2.
# There is an NPE while publishing DS_CONTAINER_START_EVENT, which means this 
event is not published.
{noformat}
2016-06-07 23:19:00,020 
[org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
exception is thrown from onContainerStarted for Container 
container_e77_1465311876353_0007_01_02
java.lang.NullPointerException
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
at 
org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at 
org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at 
org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
at 
org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>
> Found a couple of issues while testing ATSv2.
> # There is an NPE while publishing DS_CONTAINER_START_EVENT, which means 
> this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> 

[jira] [Created] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-07 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-5210:
--

 Summary: NPE in Distributed Shell while publishing 
DS_CONTAINER_START event and other miscellaneous issues
 Key: YARN-5210
 URL: https://issues.apache.org/jira/browse/YARN-5210
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: YARN-2928
Reporter: Varun Saxena
Assignee: Varun Saxena









[jira] [Commented] (YARN-4464) default value of yarn.resourcemanager.state-store.max-completed-applications should lower.

2016-06-07 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319024#comment-15319024
 ] 

Daniel Templeton commented on YARN-4464:


Ping, [~Naganarasimha Garla], [~jianhe], [~vinodkv].  Can we get consensus so 
we can get this change in for 3.0?

> default value of yarn.resourcemanager.state-store.max-completed-applications 
> should lower.
> --
>
> Key: YARN-4464
> URL: https://issues.apache.org/jira/browse/YARN-4464
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: resourcemanager
>Reporter: KWON BYUNGCHANG
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: YARN-4464.001.patch, YARN-4464.002.patch, 
> YARN-4464.003.patch, YARN-4464.004.patch
>
>
> My cluster has 120 nodes.
> I configured the RM Restart feature.
> {code}
> yarn.resourcemanager.recovery.enabled=true
> yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
> yarn.resourcemanager.fs.state-store.uri=/system/yarn/rmstore
> {code}
> Unfortunately, I did not configure 
> {{yarn.resourcemanager.state-store.max-completed-applications}}, so that 
> property took its default value of 10,000.
> I restarted the RM after changing another configuration and expected it to 
> restart immediately, but the recovery process was very slow; I waited about 
> 20 minutes before realizing that 
> {{yarn.resourcemanager.state-store.max-completed-applications}} was missing. 
> Its default value is very large. We need to change it to a lower value, or 
> document a notice on the [RM Restart 
> page|http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html].
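
Purely as an illustration (the value 1,000 below is an arbitrary placeholder, 
not something agreed in this thread), capping the store would mean adding one 
more property next to the ones quoted above:
{code}
# Hypothetical example: keep far fewer completed applications in the RM state
# store so that recovery replays much less state on restart.
yarn.resourcemanager.state-store.max-completed-applications=1000
{code}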






[jira] [Commented] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest

2016-06-07 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15319002#comment-15319002
 ] 

Carlo Curino commented on YARN-5124:


[~asuresh] I see what you tried to do with the typedefs like {{class 
ExecutionTypeMap extends HashMap}}. This is not what I was after with my 
comment, and, to the best of my understanding, it is not very idiomatic Java.

I think the code structure would be much cleaner if you wrapped the entire 
stack of maps of maps in a separate class, so that the physical representation 
stays independent of the logical functionality and much of the access logic is 
delegated to that class. This would also move the 4-way for loops into a class 
that only handles the data representation, making {{AMRMClientImpl}} a little 
simpler.
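
A rough sketch of the kind of wrapper being suggested here (the class name, 
generics, and methods are illustrative stand-ins, not the actual 
{{AMRMClientImpl}} types):
{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative wrapper that hides a two-level map behind a small API instead
// of exposing "typedef" subclasses of HashMap to the client code.
final class NestedRequestStore<K1, K2, V> {
  private final Map<K1, Map<K2, V>> store = new HashMap<>();

  void put(K1 outer, K2 inner, V value) {
    store.computeIfAbsent(outer, k -> new HashMap<>()).put(inner, value);
  }

  V get(K1 outer, K2 inner) {
    Map<K2, V> innerMap = store.get(outer);
    return innerMap == null ? null : innerMap.get(inner);
  }
}
{code}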

> Modify AMRMClient to set the ExecutionType in the ResourceRequest
> -
>
> Key: YARN-5124
> URL: https://issues.apache.org/jira/browse/YARN-5124
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5124.001.patch, YARN-5124.002.patch, 
> YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, 
> YARN-5124.006.patch, YARN-5124.008.patch, YARN-5124.009.patch, 
> YARN-5124.010.patch, YARN-5124.011.patch, 
> YARN-5124_YARN-5180_combined.007.patch, YARN-5124_YARN-5180_combined.008.patch
>
>
> Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} 
> in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} 
> that is sent to the RM.





