[jira] [Updated] (YARN-6360) Prevent FS state dump logger from cramming other log files

2017-03-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6360:
---
Attachment: YARN-6360.001.patch

> Prevent FS state dump logger from cramming other log files
> --
>
> Key: YARN-6360
> URL: https://issues.apache.org/jira/browse/YARN-6360
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6360.001.patch
>
>
> FS could dump its state to multiple files if its logger inherits its parents' 
> appenders. We should prevent that so the state dump logger does not cram 
> other log files.






[jira] [Updated] (YARN-6360) Prevent FS state dump logger from cramming other log files

2017-03-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6360:
---
Summary: Prevent FS state dump logger from cramming other log files  (was: 
Prevent FS state dump logger from inheriting parents' appenders.)

> Prevent FS state dump logger from cramming other log files
> --
>
> Key: YARN-6360
> URL: https://issues.apache.org/jira/browse/YARN-6360
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> FS could dump its state to multiple files if its logger inherits its parents' 
> appenders. We should prevent that so the state dump logger does not cram 
> other log files.






[jira] [Commented] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929367#comment-15929367
 ] 

Hadoop QA commented on YARN-6217:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6217 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859222/YARN-6217.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1ccd7576e29b 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7536815 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15307/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15307/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-6217.000.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonF

[jira] [Commented] (YARN-6345) Add container tags to resource requests

2017-03-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929341#comment-15929341
 ] 

Arun Suresh commented on YARN-6345:
---

[~vinodkv], I agree these look similar to the AllocationTags mentioned in 
YARN-4902. If it is just a matter of naming, I honestly prefer allocationTags 
to containerTags.

Instead of waiting for the new API, would you be open to adding a field to the 
{{AllocateRequest}} itself, rather than to the {{ResourceRequest}}? The tags 
could then apply to all the {{ResourceRequest}}s contained in the 
{{AllocateRequest}}.

The returned container should similarly carry a tag field, so that the AM can 
match a container against an allocationTag.

Thoughts?

> Add container tags to resource requests
> ---
>
> Key: YARN-6345
> URL: https://issues.apache.org/jira/browse/YARN-6345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> This JIRA introduces the notion of container tags.
> When an application submits container requests, it is allowed to attach to 
> them a set of string tags. The corresponding resource requests will also 
> carry these tags.
> For example, a container that will be used for running an HBase Master can be 
> marked with the tag "hb-m". Another one, belonging to a ZooKeeper application, 
> can be marked as "zk".
> Through container tags, we will be able to express constraints that refer to 
> containers with the given tags.
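As a purely hypothetical illustration of the idea above (none of these types exist in YARN today), a request carrying a set of string tags such as "hb-m" or "zk" could look roughly like this:

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch only: a container request decorated with string tags
// that later placement constraints could refer to.
public class TaggedRequestSketch {
  static final class TaggedRequest {
    final int numContainers;
    final Set<String> tags;

    TaggedRequest(int numContainers, Set<String> tags) {
      this.numContainers = numContainers;
      this.tags = Collections.unmodifiableSet(new TreeSet<>(tags));
    }
  }

  public static void main(String[] args) {
    TaggedRequest hbaseMaster =
        new TaggedRequest(1, new TreeSet<>(Arrays.asList("hb-m")));
    TaggedRequest zookeeper =
        new TaggedRequest(3, new TreeSet<>(Arrays.asList("zk")));
    System.out.println(hbaseMaster.tags + " " + zookeeper.tags);
  }
}
{code}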






[jira] [Assigned] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-16 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi reassigned YARN-6217:


Assignee: Miklos Szegedi

> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Miklos Szegedi
> Attachments: YARN-6217.000.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one second timeout.  If the test machine hits an I/O hiccup it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified.
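For illustration only (the class below is a stand-in, not the actual patch), the fix amounts to either dropping the per-test timeout entirely or making it far less aggressive than one second:

{code}
import org.junit.Test;

public class TimeoutSketch {
  // Illustrative: a generous timeout (or none at all) instead of 1000 ms, so
  // an I/O hiccup on the test machine cannot fail the test spuriously.
  @Test(timeout = 60000)
  public void testDirectoryStateChangeFromFullToNonFull() {
    // test body unchanged
  }
}
{code}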






[jira] [Updated] (YARN-6217) TestLocalCacheDirectoryManager test timeout is too aggressive

2017-03-16 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6217:
-
Attachment: YARN-6217.000.patch

Attaching suggested patch.

> TestLocalCacheDirectoryManager test timeout is too aggressive
> -
>
> Key: YARN-6217
> URL: https://issues.apache.org/jira/browse/YARN-6217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
> Attachments: YARN-6217.000.patch
>
>
> TestLocalCacheDirectoryManager#testDirectoryStateChangeFromFullToNonFull has 
> only a one second timeout.  If the test machine hits an I/O hiccup it can 
> fail.  The test timeout is too aggressive, and I question whether this test 
> even needs an explicit timeout specified.






[jira] [Commented] (YARN-6319) race condition between deleting app dir and deleting container dir

2017-03-16 Thread Hong Zhiguo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929330#comment-15929330
 ] 

Hong Zhiguo commented on YARN-6319:
---

[~haibochen], the post-callback will not linearize container cleanup. The 
cleanups are still parallel.

> race condition between deleting app dir and deleting container dir
> --
>
> Key: YARN-6319
> URL: https://issues.apache.org/jira/browse/YARN-6319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Hong Zhiguo
>Assignee: Hong Zhiguo
>
> The last container (on one node) of an app completes
> |--> triggers async deletion of container dir (container cleanup)
> |--> triggers async deletion of app dir (app cleanup)
> For LCE, deletion is done by container-executor. The "app cleanup" lists the 
> sub-directories (step 1) and then unlinks the items one by one (step 2). If a 
> file is deleted by "container cleanup" between step 1 and step 2, it reports 
> the error below and the deletion breaks.
> {code}
> ContainerExecutor: Couldn't delete file 
> $LOCAL/usercache/$USER/appcache/application_1481785469354_353539/container_1481785469354_353539_01_28/$FILE
>  - No such file or directory
> {code}
> This app dir then escapes the cleanup, which is why we always have many app 
> dirs left behind.
> solution 1: just ignore the error without breaking in 
> container-executor.c::delete_path()
> solution 2: use a lock to serialize the cleanup of same app dir.
> solution 3: backoff and retry on error
> Comments are welcome.
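As a rough illustration of solution 2 above (the class and method names are hypothetical, this is not the NodeManager code), deletions touching the same application directory could be serialized with a per-app lock:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of solution 2: serialize deletions that target the same
// app directory so app cleanup and container cleanup cannot race on one tree.
public class AppDirDeletionLocks {
  private final ConcurrentHashMap<String, ReentrantLock> locks =
      new ConcurrentHashMap<>();

  public void deleteUnderAppDir(String appDir, Runnable deletion) {
    ReentrantLock lock = locks.computeIfAbsent(appDir, k -> new ReentrantLock());
    lock.lock();
    try {
      deletion.run(); // e.g. the container-executor invocation for this path
    } finally {
      lock.unlock();
    }
  }
}
{code}

Solution 1 (ignoring the "No such file or directory" error in container-executor.c::delete_path()) would be simpler, at the cost of silently tolerating missing files.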






[jira] [Commented] (YARN-6229) resource manager web UI display BUG

2017-03-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929318#comment-15929318
 ] 

Miklos Szegedi commented on YARN-6229:
--

[~gehaijiang], are you using fair scheduler or capacity scheduler?

> resource manager web UI  display  BUG
> -
>
> Key: YARN-6229
> URL: https://issues.apache.org/jira/browse/YARN-6229
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
> Environment: hadoop 2.7.1
>Reporter: gehaijiang
> Attachments: rs.png
>
>
> resourcemanager web UI display bug:
> Memory Used  -3.44TB
> Containers Running -2607
> VCores Used -2607
> Lost Nodes  173
> These numbers are not correct.
> Cluster Metrics
> Apps Submitted | Apps Pending | Apps Running |Apps Completed  | 
> Containers Running | Memory Used | Memory Total | Memory Reserved |VCores 
> Used |  VCores Total | VCores Reserved | Active Nodes   | Decommissioned 
> Nodes | Lost Nodes | Unhealthy Nodes | Rebooted Nodes
> 3027432   0   20  3027412 -2607   -3.44TB 9.70TB  0B  -2607   
> 72400   181 0






[jira] [Commented] (YARN-6242) [Umbrella] Miscellaneous Scheduler Performance Improvements

2017-03-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929309#comment-15929309
 ] 

Miklos Szegedi commented on YARN-6242:
--

[~leftnoteasy], YARN-6361 is a FS performance issue, but not related to CS, I 
think. Should it be added to the list?

> [Umbrella] Miscellaneous Scheduler Performance Improvements
> ---
>
> Key: YARN-6242
> URL: https://issues.apache.org/jira/browse/YARN-6242
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>
> There are some performance issues in the scheduler. YARN-3091 mainly targets 
> the scheduler's locking issues; let's use this JIRA to track non-locking 
> issues.






[jira] [Updated] (YARN-6361) FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-03-16 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6361:
-
Summary: FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high 
with big queues  (was: FSLeafQueue.fetchAppsWithDemand CPU usage is high with 
big queues)

> FairScheduler: FSLeafQueue.fetchAppsWithDemand CPU usage is high with big 
> queues
> 
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Priority: Minor
> Attachments: dispatcherthread.png, threads.png
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the per-app calculations outside the sort loop {{(O\(n\))}} and 
> sorting by the precomputed values inside it instead {{O(n*log\(n\))}}.






[jira] [Commented] (YARN-6304) Skip rm.transitionToActive call to RM if RM is already active.

2017-03-16 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929308#comment-15929308
 ] 

Karthik Kambatla commented on YARN-6304:


This seems benign to me. +1. 

> Skip rm.transitionToActive call to RM if RM is already active. 
> ---
>
> Key: YARN-6304
> URL: https://issues.apache.org/jira/browse/YARN-6304
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-6304.0001.patch
>
>
> When the elector elects the RM to become active, AdminService refreshes the 
> following even though the RM is already in the ACTIVE state:
> # refreshAdminAcls 
> # refreshAll to update the configurations.
> But ideally these operations need NOT be done, and refreshing configurations 
> can be skipped on an already-ACTIVE RM. The admin can run the refresh 
> commands separately if there are any config changes to apply.
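A minimal sketch of the proposed guard (class, method, and field names below are simplified placeholders, not the actual AdminService code):

{code}
// Hypothetical sketch: skip the refresh work when the RM is already active.
public class TransitionSketch {
  enum HAState { ACTIVE, STANDBY }

  private HAState state = HAState.STANDBY;

  synchronized void transitionToActive() {
    if (state == HAState.ACTIVE) {
      // Already active: no need to refresh ACLs or configurations again.
      return;
    }
    refreshAdminAcls();
    refreshAll();
    state = HAState.ACTIVE;
  }

  private void refreshAdminAcls() { /* placeholder */ }
  private void refreshAll() { /* placeholder */ }
}
{code}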






[jira] [Updated] (YARN-6361) FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-03-16 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6361:
-
Attachment: dispatcherthread.png

> FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues
> -
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Priority: Minor
> Attachments: dispatcherthread.png, threads.png
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the per-app calculations outside the sort loop {{(O\(n\))}} and 
> sorting by the precomputed values inside it instead {{O(n*log\(n\))}}.






[jira] [Commented] (YARN-5179) Issue of CPU usage of containers

2017-03-16 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929307#comment-15929307
 ] 

Karthik Kambatla commented on YARN-5179:


[~miklos.szeg...@cloudera.com] was looking into a similar issue very recently. 
Miklos - can you check if the proposals fix the issue that you were running 
into? 

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int
> int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
>     * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN);
> This formula will not compute the right vcore-based CPU usage if vcores != 
> physical cores.






[jira] [Updated] (YARN-6361) FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-03-16 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6361:
-
Attachment: threads.png

> FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues
> -
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Priority: Minor
> Attachments: threads.png
>
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the per-app calculations outside the sort loop {{(O\(n\))}} and 
> sorting by the precomputed values inside it instead {{O(n*log\(n\))}}.






[jira] [Updated] (YARN-6361) FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-03-16 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6361:
-
Description: FSLeafQueue.fetchAppsWithDemand sorts the applications by the 
current policy. Most of the time is spent in FairShareComparator.compare. We 
could improve this by doing the per-app calculations outside the sort loop 
{{(O\(n\))}} and sorting by the precomputed values inside it instead 
{{O(n*log\(n\))}}.  (was: FSLeafQueue.fetchAppsWithDemand sorts the 
applications by the current policy. Most of the time is spent in 
FairShareComparator.compare. We could improve this by doing the per-app 
calculations outside the sort loop (O(n)) and sorting by the precomputed 
values inside it instead O(n*log(n)).)

> FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues
> -
>
> Key: YARN-6361
> URL: https://issues.apache.org/jira/browse/YARN-6361
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Priority: Minor
>
> FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
> Most of the time is spent in FairShareComparator.compare. We could improve 
> this by doing the per-app calculations outside the sort loop {{(O\(n\))}} and 
> sorting by the precomputed values inside it instead {{O(n*log\(n\))}}.






[jira] [Created] (YARN-6361) FSLeafQueue.fetchAppsWithDemand CPU usage is high with big queues

2017-03-16 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-6361:


 Summary: FSLeafQueue.fetchAppsWithDemand CPU usage is high with 
big queues
 Key: YARN-6361
 URL: https://issues.apache.org/jira/browse/YARN-6361
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Miklos Szegedi
Priority: Minor


FSLeafQueue.fetchAppsWithDemand sorts the applications by the current policy. 
Most of the time is spent in FairShareComparator.compare. We could improve this 
by doing the per-app calculations outside the sort loop (O(n)) and sorting by 
the precomputed values inside it instead O(n*log(n)).
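To illustrate the idea (this is a generic sketch, not the FairScheduler code): compute each application's comparison key once, in O(n), and let the O(n*log(n)) sort compare only the precomputed keys instead of recomputing shares inside FairShareComparator.

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Generic decorate-sort sketch: precompute a sort key per app, then sort.
public class PrecomputedSortSketch {
  static final class App {
    final String name;
    final double sortKey; // hypothetical precomputed fair-share-based key

    App(String name, double sortKey) {
      this.name = name;
      this.sortKey = sortKey;
    }
  }

  static List<App> sortByPrecomputedKey(List<App> apps) {
    List<App> sorted = new ArrayList<>(apps); // keys already computed, O(n)
    sorted.sort(Comparator.comparingDouble((App a) -> a.sortKey)); // cheap compares
    return sorted;
  }
}
{code}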






[jira] [Commented] (YARN-6244) Introduce AtomicResource in ResourceUsage to avoid read-write lock.

2017-03-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929299#comment-15929299
 ] 

Miklos Szegedi commented on YARN-6244:
--

[~leftnoteasy], I have a question regarding AtomicResource. Does the fact that 
{{setMemorySize}} and {{setVirtualCores}} can be called one after another 
violate the atomicity? Would it be a good idea to throw an exception if they 
are used within AtomicResource?

> Introduce AtomicResource in ResourceUsage to avoid read-write lock.
> ---
>
> Key: YARN-6244
> URL: https://issues.apache.org/jira/browse/YARN-6244
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, resourcemanager, 
> scheduler
>Reporter: Wangda Tan
> Attachments: YARN-6244.preliminary.0.patch
>
>
> While doing SLS tests for YARN-5139, I found that when multiple threads are 
> scheduling at the same time, the read/write lock of ResourceUsage becomes a 
> bottleneck. This could be improved.
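One way to read the proposal (purely an illustrative sketch, not the attached patch): hold an immutable (memory, vcores) snapshot in an AtomicReference, so readers never take a read lock and writers swap the whole snapshot at once. This is also why separate setters such as {{setMemorySize}} and {{setVirtualCores}} would break atomicity, as questioned above.

{code}
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch: an atomically swappable, immutable resource snapshot.
public class AtomicResourceSketch {
  public static final class Snapshot {
    public final long memorySize;
    public final int virtualCores;

    public Snapshot(long memorySize, int virtualCores) {
      this.memorySize = memorySize;
      this.virtualCores = virtualCores;
    }
  }

  private final AtomicReference<Snapshot> value =
      new AtomicReference<>(new Snapshot(0, 0));

  public Snapshot get() {
    return value.get(); // consistent pair, no read lock needed
  }

  public void set(long memorySize, int virtualCores) {
    value.set(new Snapshot(memorySize, virtualCores)); // single atomic swap
  }
}
{code}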






[jira] [Created] (YARN-6360) Prevent FS state dump logger from inheriting parents' appenders.

2017-03-16 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6360:
--

 Summary: Prevent FS state dump logger from inheriting parents' 
appenders.
 Key: YARN-6360
 URL: https://issues.apache.org/jira/browse/YARN-6360
 Project: Hadoop YARN
  Issue Type: Bug
  Components: fairscheduler
Affects Versions: 3.0.0-alpha2, 2.9.0
Reporter: Yufei Gu
Assignee: Yufei Gu


FS could dump its state to multiple files if its logger inherits its parents' 
appenders. We should prevent that so the state dump logger does not cram other 
log files.
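For illustration, a log4j 1.x sketch of the intended setup (the logger and file names are hypothetical, not the actual patch): give the state dump logger its own appender and disable additivity so its output does not also flow into the parent appenders.

{code}
import java.io.IOException;

import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

// Hypothetical sketch: a dedicated, non-additive logger for the FS state dump.
public class StateDumpLoggerSketch {
  public static Logger createStateDumpLogger() throws IOException {
    Logger log = Logger.getLogger("FairSchedulerStateDump");
    log.setAdditivity(false); // do not propagate events to parent appenders
    log.addAppender(new RollingFileAppender(
        new PatternLayout("%d{ISO8601} %m%n"),
        "fair-scheduler-statedump.log"));
    return log;
  }
}
{code}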






[jira] [Comment Edited] (YARN-6279) Scheduler rest api JSON is not providing all child queues names

2017-03-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929266#comment-15929266
 ] 

Miklos Szegedi edited comment on YARN-6279 at 3/17/17 1:03 AM:
---

[~ashishdoneriya], based on the JSON and the version, did you run into 
YARN-2336? It looks like it is fixed in branch 2.8. In case it helps, the fix 
for that jira is YARN-3957.


was (Author: miklos.szeg...@cloudera.com):
[~ashishdoneriya], based on the JSON and the version, did you run into 
YARN-2336? It looks like it is fixed in branch 2.8.

> Scheduler rest api JSON is not providing all child queues names
> ---
>
> Key: YARN-6279
> URL: https://issues.apache.org/jira/browse/YARN-6279
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, fairscheduler, scheduler
>Affects Versions: 2.4.1
> Environment: Ubuntu 14.04, 7.7 GiB, i5, 3.4GHz x 4, 64-bit
>Reporter: Ashish Doneriya
>
> When I hit the REST API /ws/v1/cluster/scheduler to get the JSON output, it 
> gave me all the child queue information, but it did not give me all the 
> information about the child queues of the child queues. It displays 
> information for only one sub-child queue, while the XML format has no such 
> problem.
> I'm providing the XML and JSON outputs below.
> 
> {"scheduler":{"schedulerInfo":{"type":"fairScheduler","rootQueue":{"maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":8192,"vCores":8},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root","schedulingPolicy":"fair","childQueues":[{"maxApps":20,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":5283,"vCores":2},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":5283,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering","schedulingPolicy":"fair","childQueues":{"type":["fairSchedulerLeafQueueInfo"],"maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.Development","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0},"childQueues":{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.TESTING","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}},{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2909,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.default","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}]
> 
> 
>   http://www.w3.org/2001/XMLSchema-instance"; 
> xsi:type="fairScheduler">
>   
>   2147483647
>   
>   0
>   0
>   
>   
>   8192
>   8
>   
>   
>   0
>   0
>   
>   
>   8192
>   8
>   
>   
>   8192
>   8
>   
>   root
>   fair
>   
>   20
>   
>   1024
>   1
>   
>   
>   5283
>   2
>   
>   
>   0
>   0
>   
>   
>   5283
>   0
>   
>   
>   8192
>   8
>   
>   root.Engineering
>   fair
>

[jira] [Commented] (YARN-6279) Scheduler rest api JSON is not providing all child queues names

2017-03-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929266#comment-15929266
 ] 

Miklos Szegedi commented on YARN-6279:
--

[~ashishdoneriya], based on the JSON and the version, did you run into 
YARN-2336? It looks like it is fixed in branch 2.8.

> Scheduler rest api JSON is not providing all child queues names
> ---
>
> Key: YARN-6279
> URL: https://issues.apache.org/jira/browse/YARN-6279
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, fairscheduler, scheduler
>Affects Versions: 2.4.1
> Environment: Ubuntu 14.04, 7.7 GiB, i5, 3.4GHz x 4, 64-bit
>Reporter: Ashish Doneriya
>
> When I hit the REST API /ws/v1/cluster/scheduler to get the JSON output, it 
> gave me all the child queue information, but it did not give me all the 
> information about the child queues of the child queues. It displays 
> information for only one sub-child queue, while the XML format has no such 
> problem.
> I'm providing the XML and JSON outputs below.
> 
> {"scheduler":{"schedulerInfo":{"type":"fairScheduler","rootQueue":{"maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":8192,"vCores":8},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root","schedulingPolicy":"fair","childQueues":[{"maxApps":20,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":5283,"vCores":2},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":5283,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering","schedulingPolicy":"fair","childQueues":{"type":["fairSchedulerLeafQueueInfo"],"maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.Development","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0},"childQueues":{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":1024,"vCores":1},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2642,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.Engineering.TESTING","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}},{"type":"fairSchedulerLeafQueueInfo","maxApps":2147483647,"minResources":{"memory":0,"vCores":0},"maxResources":{"memory":8192,"vCores":8},"usedResources":{"memory":0,"vCores":0},"fairResources":{"memory":2909,"vCores":0},"clusterResources":{"memory":8192,"vCores":8},"queueName":"root.default","schedulingPolicy":"fair","numPendingApps":0,"numActiveApps":0}]
> 
> 
>   http://www.w3.org/2001/XMLSchema-instance"; 
> xsi:type="fairScheduler">
>   
>   2147483647
>   
>   0
>   0
>   
>   
>   8192
>   8
>   
>   
>   0
>   0
>   
>   
>   8192
>   8
>   
>   
>   8192
>   8
>   
>   root
>   fair
>   
>   20
>   
>   1024
>   1
>   
>   
>   5283
>   2
>   
>   
>   0
>   0
>   
>   
>   5283
>   0
>   
>   
>   8192
>   8
>   
>   root.Engineering
>   fair
>xsi:type="fairSchedulerLeafQueueInfo">
>   2147483647
>   
>   1024
>   1
>

[jira] [Commented] (YARN-6345) Add container tags to resource requests

2017-03-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929255#comment-15929255
 ] 

Miklos Szegedi commented on YARN-6345:
--

[~subru], [~kkaranasos], yes it does. Thank you for the clarification.

> Add container tags to resource requests
> ---
>
> Key: YARN-6345
> URL: https://issues.apache.org/jira/browse/YARN-6345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> This JIRA introduces the notion of container tags.
> When an application submits container requests, it is allowed to attach to 
> them a set of string tags. The corresponding resource requests will also 
> carry these tags.
> For example, a container that will be used for running an HBase Master can be 
> marked with the tag "hb-m". Another one, belonging to a ZooKeeper application, 
> can be marked as "zk".
> Through container tags, we will be able to express constraints that refer to 
> containers with the given tags.






[jira] [Commented] (YARN-6357) Implement TimelineCollector#putEntitiesAsync

2017-03-16 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929254#comment-15929254
 ] 

Joep Rottinghuis commented on YARN-6357:


bq. I can file a separate bug from this one dealing with exception handling to 
tackle the sync vs async nature.
Sorry, that was a copy-paste from the previous bug, YARN-5269. This _is_ the separate jira.

[~varun_saxena] you were right in the previous call. I was thinking about the 
writer side, where flush works correctly; you were thinking one level up, where 
flush wasn't appropriately called.

> Implement TimelineCollector#putEntitiesAsync
> 
>
> Key: YARN-6357
> URL: https://issues.apache.org/jira/browse/YARN-6357
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Haibo Chen
>  Labels: yarn-5355-merge-blocker
>
> As discovered and discussed in YARN-5269 the 
> TimelineCollector#putEntitiesAsync method is currently not implemented and 
> TimelineCollector#putEntities is asynchronous.
> TimelineV2ClientImpl#putEntities vs TimelineV2ClientImpl#putEntitiesAsync 
> correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... 
> with the correct argument. This argument does seem to make it into the 
> params, and on the server side TimelineCollectorWebService#putEntities 
> correctly pulls the async parameter from the rest call. See line 156:
> {code}
> boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
> {code}
> However, this is where the problem starts. It simply calls 
> TimelineCollector#putEntities and ignores the value of isAsync. It should 
> instead have called TimelineCollector#putEntitiesAsync, which is currently 
> not implemented.
> putEntities should call putEntitiesAsync and then after that call 
> writer.flush()
> The fact that we flush on close and we flush periodically should be more of a 
> concern of avoiding data loss; close in case sync is never called and the 
> periodic flush to guard against having data from slow writers get buffered 
> for a long time and expose us to risk of loss in case the collector crashes 
> with data in its buffers. Size-based flush is a different concern to avoid 
> blowing up memory footprint.
> The spooling behavior is also somewhat separate.
> We have two separate methods on our API putEntities and putEntitiesAsync and 
> they should have different behavior beyond waiting for the request to be sent.
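The intended split can be sketched as follows (simplified types only, not the real TimelineCollector API): the sync path delegates to the async path and then flushes the writer.

{code}
// Illustrative sketch with simplified types.
interface EntityWriter {
  void write(String entityJson); // buffers the write
  void flush();                  // blocks until buffered data is persisted
}

class CollectorSketch {
  private final EntityWriter writer;

  CollectorSketch(EntityWriter writer) {
    this.writer = writer;
  }

  void putEntitiesAsync(String entityJson) {
    writer.write(entityJson); // rely on periodic/size-based flush for durability
  }

  void putEntities(String entityJson) {
    putEntitiesAsync(entityJson);
    writer.flush();           // sync semantics: data has reached the store
  }
}
{code}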






[jira] [Updated] (YARN-6357) Implement TimelineCollector#putEntitiesAsync

2017-03-16 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-6357:
---
Description: 
As discovered and discussed in YARN-5269 the TimelineCollector#putEntitiesAsync 
method is currently not implemented and TimelineCollector#putEntities is 
asynchronous.

TimelineV2ClientImpl#putEntities vs TimelineV2ClientImpl#putEntitiesAsync 
correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... with 
the correct argument. This argument does seem to make it into the params, and 
on the server side TimelineCollectorWebService#putEntities correctly pulls the 
async parameter from the rest call. See line 156:
{code}
boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
{code}
However, this is where the problem starts. It simply calls 
TimelineCollector#putEntities and ignores the value of isAsync. It should 
instead have called TimelineCollector#putEntitiesAsync, which is currently not 
implemented.
putEntities should call putEntitiesAsync and then after that call writer.flush()
The fact that we flush on close and we flush periodically should be more of a 
concern of avoiding data loss; close in case sync is never called and the 
periodic flush to guard against having data from slow writers get buffered for 
a long time and expose us to risk of loss in case the collector crashes with 
data in its buffers. Size-based flush is a different concern to avoid blowing 
up memory footprint.
The spooling behavior is also somewhat separate.
We have two separate methods on our API putEntities and putEntitiesAsync and 
they should have different behavior beyond waiting for the request to be sent.

  was:
As discovered and discussed in YARN-5269 the TimelineCollector#putEntitiesAsync 
method is currently not implemented and TimelineCollector#putEntities is 
asynchronous.

TimelineV2ClientImpl#putEntities vs TimelineV2ClientImpl#putEntitiesAsync 
correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... with 
the correct argument. This argument does seem to make it into the params, and 
on the server side TimelineCollectorWebService#putEntities correctly pulls the 
async parameter from the rest call. See line 156:
{code}
boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
{code}
However, this is where the problem starts. It simply calls 
TimelineCollector#putEntities and ignores the value of isAsync. It should 
instead have called TimelineCollector#putEntitiesAsync, which is currently not 
implemented.
putEntities should call putEntitiesAsync and then after that call writer.flush()
The fact that we flush on close and we flush periodically should be more of a 
concern of avoiding data loss; close in case sync is never called and the 
periodic flush to guard against having data from slow writers get buffered for 
a long time and expose us to risk of loss in case the collector crashes with 
data in its buffers. Size-based flush is a different concern to avoid blowing 
up memory footprint.
The spooling behavior is also somewhat separate.
We have two separate methods on our API putEntities and putEntitiesAsync and 
they should have different behavior beyond waiting for the request to be sent. 
I can file a separate bug from this one dealing with exception handling to 
tackle the sync vs async nature. During the meeting today I was thinking about 
the HBase writer that has a flush, which definitely blocks until data is 
flushed to HBase (ignoring the spooling for the moment).


> Implement TimelineCollector#putEntitiesAsync
> 
>
> Key: YARN-6357
> URL: https://issues.apache.org/jira/browse/YARN-6357
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Haibo Chen
>  Labels: yarn-5355-merge-blocker
>
> As discovered and discussed in YARN-5269 the 
> TimelineCollector#putEntitiesAsync method is currently not implemented and 
> TimelineCollector#putEntities is asynchronous.
> TimelineV2ClientImpl#putEntities vs TimelineV2ClientImpl#putEntitiesAsync 
> correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... 
> with the correct argument. This argument does seem to make it into the 
> params, and on the server side TimelineCollectorWebService#putEntities 
> correctly pulls the async parameter from the rest call. See line 156:
> {code}
> boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
> {code}
> However, this is where the problem starts. It simply calls 
> TimelineCollector#putEntities and ignores the value of isAsync. It should 
> instead have called TimelineCollector#putEntitiesAsync, which is currently 
> not implemented.
> putEntities should call p

[jira] [Commented] (YARN-6359) TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929221#comment-15929221
 ] 

Hadoop QA commented on YARN-6359:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 
30s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6359 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859196/YARN-6359.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d6a187209ff7 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c04fb35 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15306/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15306/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition
> 
>
> Key: YARN-6359
> URL: https://issues.apache.org/jira/browse/YARN-6359
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-63

[jira] [Commented] (YARN-6339) Improve performance for createAndGetApplicationReport

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929205#comment-15929205
 ] 

Hadoop QA commented on YARN-6339:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 148 unchanged - 1 fixed = 150 total (was 149) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 42m 
19s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6339 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859186/YARN-6339.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ab1bd35c2c4a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4812518 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15305/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15305/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hado

[jira] [Updated] (YARN-6359) TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition

2017-03-16 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-6359:

Attachment: YARN-6359.001.patch

Despite running it over 1000 times, I wasn't able to reproduce this in my 
environment.  However, it seems likely that the problem is due to a race 
condition between when the metric for killed apps is checked and when that 
metric is updated.  The 001 patch fixes this by adding some looping code with a 
timeout, similar to what {{MockRM#waitForState}} does.  I've verified that this 
helps solve the problem by (temporarily) adding a sleep to the metrics-updating 
code.
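The waiting logic can be sketched roughly like this (the helper name, metric accessor, and timings are illustrative, not the actual 001 patch):

{code}
import java.util.function.IntSupplier;

import org.junit.Assert;

// Illustrative sketch: poll a metric until it reaches the expected value or a
// deadline passes, instead of asserting immediately and racing the update.
public class WaitForMetricSketch {
  static void waitForMetric(IntSupplier metric, int expected)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + 10000;
    while (metric.getAsInt() != expected
        && System.currentTimeMillis() < deadline) {
      Thread.sleep(100);
    }
    Assert.assertEquals(expected, metric.getAsInt());
  }
}
{code}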

> TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition
> 
>
> Key: YARN-6359
> URL: https://issues.apache.org/jira/browse/YARN-6359
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-6359.001.patch
>
>
> We've seen (very rarely) a test failure in 
> {{TestRM#testApplicationKillAtAcceptedState}}
> {noformat}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRM.testApplicationKillAtAcceptedState(TestRM.java:645)
> {noformat}






[jira] [Created] (YARN-6359) TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition

2017-03-16 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-6359:
---

 Summary: TestRM#testApplicationKillAtAcceptedState fails rarely 
due to race condition
 Key: YARN-6359
 URL: https://issues.apache.org/jira/browse/YARN-6359
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Affects Versions: 2.9.0, 3.0.0-alpha3
Reporter: Robert Kanter
Assignee: Robert Kanter


We've seen (very rarely) a test failure in 
{{TestRM#testApplicationKillAtAcceptedState}}

{noformat}
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestRM.testApplicationKillAtAcceptedState(TestRM.java:645)
{noformat}







[jira] [Created] (YARN-6358) Cache the resolved hosts prevent calls to InetAddress.getByName and normalizeHost

2017-03-16 Thread Jose Miguel Arreola (JIRA)
Jose Miguel Arreola created YARN-6358:
-

 Summary: Cache the resolved hosts prevent calls to 
InetAddress.getByName and normalizeHost
 Key: YARN-6358
 URL: https://issues.apache.org/jira/browse/YARN-6358
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: resourcemanager, security
Reporter: Jose Miguel Arreola


When running performance tests, we noticed that a lot of time is spent 
resolving host addresses.
In our specific scenario, we saw the function 
org.apache.hadoop.security.SecurityUtil.getInetAddressByName take a lot of 
time to resolve hosts, and the same function is called many times.
I saw that org.apache.hadoop.yarn.server.resourcemanager.NodesListManager 
already has a cached resolver for the same reason.
So the proposal is to make this cache generic, use it to save time in the 
functions we already know about, and make it available so the cache can be 
used anywhere else.
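A minimal sketch of the kind of generic cache being proposed (the class and method names are hypothetical, not an existing Hadoop API):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical generic resolver cache: resolve each host name once and reuse
// the result instead of calling InetAddress.getByName on every request.
public class CachedHostResolver {
  private final ConcurrentHashMap<String, InetAddress> cache =
      new ConcurrentHashMap<>();

  public InetAddress resolve(String host) throws UnknownHostException {
    InetAddress cached = cache.get(host);
    if (cached != null) {
      return cached;
    }
    InetAddress resolved = InetAddress.getByName(host); // the expensive call
    cache.putIfAbsent(host, resolved);
    return resolved;
  }
}
{code}

A real version would likely also need an expiry or refresh policy so that DNS changes are eventually picked up.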






[jira] [Commented] (YARN-6357) Implement TimelineCollector#putEntitiesAsync

2017-03-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929048#comment-15929048
 ] 

Varun Saxena commented on YARN-6357:


bq. We have two separate methods on our API putEntities and putEntitiesAsync 
and they should have different behavior beyond waiting for the request to be 
sent. I can file a separate bug from this one dealing with exception handling 
to tackle the sync vs async nature.
Sorry, I couldn't get this. Which exception handling part will be handled in this 
JIRA? IIUC, this JIRA is intended to check the isAsync flag and call flush 
immediately for a sync call.
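For reference, a minimal sketch of that split (the signatures and the 
writer/context fields are simplified stand-ins, not the actual TimelineCollector 
code):
{code:java}
// Sketch only: simplified signatures, not the actual TimelineCollector code.
public void putEntitiesAsync(TimelineEntities entities, UserGroupInformation ugi)
    throws IOException {
  // Hand the entities to the writer without forcing them out.
  writer.write(context, entities, ugi);
}

public void putEntities(TimelineEntities entities, UserGroupInformation ugi)
    throws IOException {
  putEntitiesAsync(entities, ugi);
  // The sync variant additionally blocks until the data is flushed.
  writer.flush();
}
{code}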

> Implement TimelineCollector#putEntitiesAsync
> 
>
> Key: YARN-6357
> URL: https://issues.apache.org/jira/browse/YARN-6357
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Haibo Chen
>  Labels: yarn-5355-merge-blocker
>
> As discovered and discussed in YARN-5269 the 
> TimelineCollector#putEntitiesAsync method is currently not implemented and 
> TimelineCollector#putEntities is asynchronous.
> TimelineV2ClientImpl#putEntities vs TimelineV2ClientImpl#putEntitiesAsync 
> correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... 
> with the correct argument. This argument does seem to make it into the 
> params, and on the server side TimelineCollectorWebService#putEntities 
> correctly pulls the async parameter from the rest call. See line 156:
> {code}
> boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
> {code}
> However, this is where the problem starts. It simply calls 
> TimelineCollector#putEntities and ignores the value of isAsync. It should 
> instead have called TimelineCollector#putEntitiesAsync, which is currently 
> not implemented.
> putEntities should call putEntitiesAsync and then after that call 
> writer.flush()
> The fact that we flush on close and we flush periodically should be more of a 
> concern of avoiding data loss; close in case sync is never called and the 
> periodic flush to guard against having data from slow writers get buffered 
> for a long time and expose us to risk of loss in case the collector crashes 
> with data in its buffers. Size-based flush is a different concern to avoid 
> blowing up memory footprint.
> The spooling behavior is also somewhat separate.
> We have two separate methods on our API putEntities and putEntitiesAsync and 
> they should have different behavior beyond waiting for the request to be 
> sent. I can file a separate bug from this one dealing with exception handling 
> to tackle the sync vs async nature. During the meeting today I was thinking 
> about the HBase writer that has a flush, which definitely blocks until data 
> is flushed to HBase (ignoring the spooling for the moment).






[jira] [Commented] (YARN-6357) Implement TimelineCollector#putEntitiesAsync

2017-03-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929037#comment-15929037
 ] 

Varun Saxena commented on YARN-6357:


Correct. This is what I was suspecting in the previous call.
In YARN-3367 sync/async changes were only made on the client side.

> Implement TimelineCollector#putEntitiesAsync
> 
>
> Key: YARN-6357
> URL: https://issues.apache.org/jira/browse/YARN-6357
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Haibo Chen
>  Labels: yarn-5355-merge-blocker
>
> As discovered and discussed in YARN-5269 the 
> TimelineCollector#putEntitiesAsync method is currently not implemented and 
> TimelineCollector#putEntities is asynchronous.
> TimelineV2ClientImpl#putEntities vs TimelineV2ClientImpl#putEntitiesAsync 
> correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... 
> with the correct argument. This argument does seem to make it into the 
> params, and on the server side TimelineCollectorWebService#putEntities 
> correctly pulls the async parameter from the rest call. See line 156:
> {code}
> boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
> {code}
> However, this is where the problem starts. It simply calls 
> TimelineCollector#putEntities and ignores the value of isAsync. It should 
> instead have called TimelineCollector#putEntitiesAsync, which is currently 
> not implemented.
> putEntities should call putEntitiesAsync and then after that call 
> writer.flush()
> The fact that we flush on close and we flush periodically should be more of a 
> concern of avoiding data loss; close in case sync is never called and the 
> periodic flush to guard against having data from slow writers get buffered 
> for a long time and expose us to risk of loss in case the collector crashes 
> with data in its buffers. Size-based flush is a different concern to avoid 
> blowing up memory footprint.
> The spooling behavior is also somewhat separate.
> We have two separate methods on our API putEntities and putEntitiesAsync and 
> they should have different behavior beyond waiting for the request to be 
> sent. I can file a separate bug from this one dealing with exception handling 
> to tackle the sync vs async nature. During the meeting today I was thinking 
> about the HBase writer that has a flush, which definitely blocks until data 
> is flushed to HBase (ignoring the spooling for the moment).






[jira] [Updated] (YARN-6339) Improve performance for createAndGetApplicationReport

2017-03-16 Thread yunjiong zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yunjiong zhao updated YARN-6339:

Attachment: YARN-6339.002.patch

Updated the patch with further improvements.
Changed RMAppImpl.logAggregationStatus from a HashMap to a ConcurrentHashMap, so 
that even while holding only the read lock we can safely update logAggregationStatus.
It then returns Collections.unmodifiableMap to avoid creating a new HashMap on 
every call.
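As a rough illustration of the pattern (names and types are simplified, not the 
actual RMAppImpl code):
{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified stand-in for the logAggregationStatus handling.
public class LogAggregationReports {
  private final Map<String, String> reports = new ConcurrentHashMap<>();
  private final ReadWriteLock lock = new ReentrantReadWriteLock();

  public Map<String, String> getReports() {
    lock.readLock().lock();
    try {
      // A ConcurrentHashMap can be updated safely while only the read lock is
      // held, e.g. to mark a node as TIME_OUT ...
      reports.putIfAbsent("node-1", "TIME_OUT");
      // ... and the unmodifiable view avoids allocating a new HashMap per call.
      return Collections.unmodifiableMap(reports);
    } finally {
      lock.readLock().unlock();
    }
  }
}
{code}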



> Improve performance for createAndGetApplicationReport
> -
>
> Key: YARN-6339
> URL: https://issues.apache.org/jira/browse/YARN-6339
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: yunjiong zhao
>Assignee: yunjiong zhao
> Attachments: YARN-6339.001.patch, YARN-6339.002.patch
>
>
> There are two performance issues when calling createAndGetApplicationReport:
> One is inside ProtoUtils.convertFromProtoFormat: replace is too slow for 
> clusters which have more than 3000 nodes. Using substring is much better: 
> https://issues.apache.org/jira/browse/YARN-6285?focusedCommentId=15923241&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15923241
> The other one is inside getLogAggregationReportsForApp: if some application's 
> LogAggregationStatus is TIME_OUT, every time it is called it creates a new 
> HashMap, which produces lots of garbage.






[jira] [Commented] (YARN-6352) Header injections are possible in the application proxy servlet

2017-03-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929026#comment-15929026
 ] 

Varun Saxena commented on YARN-6352:


[~Naganarasimha], did you try with the latest trunk code?
This issue does not seem to occur after Jetty was upgraded from version 6 to 
version 9.
It seems this vulnerability was fixed in Jetty somewhere between 6.1.26 
and 9.3.11.

> Header injections are possible in the application proxy servlet
> ---
>
> Key: YARN-6352
> URL: https://issues.apache.org/jira/browse/YARN-6352
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: headerInjection.png, YARN-6352.001.patch
>
>
> This issue was found in WVS security tool. 






[jira] [Commented] (YARN-6319) race condition between deleting app dir and deleting container dir

2017-03-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929009#comment-15929009
 ] 

Haibo Chen commented on YARN-6319:
--

Thanks [~zhiguohong] for the additional explanation of option 2. While I agree with 
you that a post-callback can completely avoid the race condition, linearizing 
container cleanup and app cleanup will unnecessarily slow down the application 
state transition process, which other tasks, such as log aggregation, depend on. 
Especially when an application has a lot of containers: previously the app dir 
cleanup task could run concurrently with all container cleanup tasks, whereas now 
it would have to wait for all container cleanup tasks to finish. The point I want 
to make is that the race condition is safe to have as long as we ignore the 
file-not-found error during deletion. I notice YARN-2902 added the code to ignore 
the file-not-found error code for LCE. Is it included in the version where you ran 
into this issue?
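For reference, a generic java.nio illustration of that "treat already-deleted 
entries as success" behavior (this is only a sketch of the idea, not the native 
container-executor.c code):
{code:java}
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Delete a directory tree, tolerating entries that a racing cleanup removed first.
public final class TolerantDelete {
  public static void deleteTree(Path root) throws IOException {
    Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
          throws IOException {
        Files.deleteIfExists(file);
        return FileVisitResult.CONTINUE;
      }

      @Override
      public FileVisitResult visitFileFailed(Path file, IOException exc)
          throws IOException {
        if (exc instanceof NoSuchFileException) {
          return FileVisitResult.CONTINUE;  // another cleanup removed it first
        }
        throw exc;
      }

      @Override
      public FileVisitResult postVisitDirectory(Path dir, IOException exc)
          throws IOException {
        if (exc != null && !(exc instanceof NoSuchFileException)) {
          throw exc;
        }
        Files.deleteIfExists(dir);
        return FileVisitResult.CONTINUE;
      }
    });
  }
}
{code}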

> race condition between deleting app dir and deleting container dir
> --
>
> Key: YARN-6319
> URL: https://issues.apache.org/jira/browse/YARN-6319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Hong Zhiguo
>Assignee: Hong Zhiguo
>
> Last container (on one node) of one app complete
> |--> triggers async deletion of container dir (container cleanup)
> |--> triggers async deletion of app dir (app cleanup)
> For LCE, deletion is done by container-executor. The "app cleanup" lists the 
> sub-dirs (step 1) and then unlinks items one by one (step 2). If a file is 
> deleted by "container cleanup" between step 1 and step 2, it reports the error 
> below and breaks off the deletion.
> {code}
> ContainerExecutor: Couldn't delete file 
> $LOCAL/usercache/$USER/appcache/application_1481785469354_353539/container_1481785469354_353539_01_28/$FILE
>  - No such file or directory
> {code}
> The app dir then escapes the cleanup, which is why we always have many app 
> dirs left behind.
> solution 1: just ignore the error without breaking in 
> container-executor.c::delete_path()
> solution 2: use a lock to serialize the cleanup of same app dir.
> solution 3: backoff and retry on error
> Comments are welcome.






[jira] [Created] (YARN-6357) Implement TimelineCollector#putEntitiesAsync

2017-03-16 Thread Joep Rottinghuis (JIRA)
Joep Rottinghuis created YARN-6357:
--

 Summary: Implement TimelineCollector#putEntitiesAsync
 Key: YARN-6357
 URL: https://issues.apache.org/jira/browse/YARN-6357
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: ATSv2, timelineserver
Affects Versions: YARN-2928
Reporter: Joep Rottinghuis
Assignee: Haibo Chen


As discovered and discussed in YARN-5269 the TimelineCollector#putEntitiesAsync 
method is currently not implemented and TimelineCollector#putEntities is 
asynchronous.

TimelineV2ClientImpl#putEntities vs TimelineV2ClientImpl#putEntitiesAsync 
correctly call TimelineEntityDispatcher#dispatchEntities(boolean sync,... with 
the correct argument. This argument does seem to make it into the params, and 
on the server side TimelineCollectorWebService#putEntities correctly pulls the 
async parameter from the rest call. See line 156:
{code}
boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
{code}
However, this is where the problem starts. It simply calls 
TimelineCollector#putEntities and ignores the value of isAsync. It should 
instead have called TimelineCollector#putEntitiesAsync, which is currently not 
implemented.
putEntities should call putEntitiesAsync and then after that call writer.flush()
The fact that we flush on close and we flush periodically should be more of a 
concern of avoiding data loss; close in case sync is never called and the 
periodic flush to guard against having data from slow writers get buffered for 
a long time and expose us to risk of loss in case the collector crashes with 
data in its buffers. Size-based flush is a different concern to avoid blowing 
up memory footprint.
The spooling behavior is also somewhat separate.
We have two separate methods on our API putEntities and putEntitiesAsync and 
they should have different behavior beyond waiting for the request to be sent. 
I can file a separate bug from this one dealing with exception handling to 
tackle the sync vs async nature. During the meeting today I was thinking about 
the HBase writer that has a flush, which definitely blocks until data is 
flushed to HBase (ignoring the spooling for the moment).






[jira] [Updated] (YARN-6342) Issues in async API of TimelineClient

2017-03-16 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-6342:
---
Labels: yarn-5355-merge-blocker  (was: )

> Issues in async API of TimelineClient
> -
>
> Key: YARN-6342
> URL: https://issues.apache.org/jira/browse/YARN-6342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>  Labels: yarn-5355-merge-blocker
>
> Found these with [~rohithsharma] while browsing the code
> - In stop: it calls shutdownNow, which doesn't wait for pending tasks; should 
> it use shutdown instead?
> {code}
> public void stop() {
>   LOG.info("Stopping TimelineClient.");
>   executor.shutdownNow();
>   try {
> executor.awaitTermination(DRAIN_TIME_PERIOD, TimeUnit.MILLISECONDS);
>   } catch (InterruptedException e) {
> {code}
> - In TimelineClientImpl#createRunnable:
> If any exception happens when publishing one entity 
> (publishWithoutBlockingOnQueue), the thread exits. I think it should make a 
> best effort to continue publishing the remaining timeline entities; one failure 
> should not prevent all follow-up entities from being published.






[jira] [Commented] (YARN-6342) Issues in async API of TimelineClient

2017-03-16 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928977#comment-15928977
 ] 

Joep Rottinghuis commented on YARN-6342:


A possible alternative approach is to code/configure one overall timeout within 
which we need to shut down all clients. We can then give each client a fraction 
thereof and keep track of how much is left. I'm doing something similar in the 
spooling code, see YARN-4061 and HBASE-17018 (though that isn't complete yet and 
I need to pick up that work again).

Wrt. data loss on shutdown, note that the loss will be limited to 
TIMELINE_SERVICE_WRITER_FLUSH_INTERVAL_SECONDS, which defaults to 1 minute. On 
average the loss would be about half that (30 seconds); in the worst case it 
would be the full interval.
The timeout during shutdown and the timeout at which we detect that HBase doesn't 
accept writes (and we end up spooling to file) should be carefully tuned so that 
we don't lose any data under normal operating circumstances, even when HBase is 
down.
Perhaps we should have a config for this, or at least have one be a multiple of 
the other. I'll keep this in mind in the spooling work.
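On the stop() question quoted below, a minimal sketch of the usual 
drain-then-force pattern under such a time budget (drainTimeMillis standing in 
for whatever fraction of the overall budget this client gets):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: drain pending publish tasks within a time budget, then force-stop.
final class PublisherStopper {
  static void stop(ExecutorService executor, long drainTimeMillis) {
    executor.shutdown();  // stop accepting new tasks but keep the pending ones
    try {
      if (!executor.awaitTermination(drainTimeMillis, TimeUnit.MILLISECONDS)) {
        executor.shutdownNow();  // budget exhausted, interrupt whatever is left
      }
    } catch (InterruptedException e) {
      executor.shutdownNow();
      Thread.currentThread().interrupt();
    }
  }
}
{code}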

> Issues in async API of TimelineClient
> -
>
> Key: YARN-6342
> URL: https://issues.apache.org/jira/browse/YARN-6342
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>
> Found these with [~rohithsharma] while browsing the code
> - In stop: it calls shutdownNow, which doesn't wait for pending tasks; should 
> it use shutdown instead?
> {code}
> public void stop() {
>   LOG.info("Stopping TimelineClient.");
>   executor.shutdownNow();
>   try {
> executor.awaitTermination(DRAIN_TIME_PERIOD, TimeUnit.MILLISECONDS);
>   } catch (InterruptedException e) {
> {code}
> - In TimelineClientImpl#createRunnable:
> If any exception happens when publishing one entity 
> (publishWithoutBlockingOnQueue), the thread exits. I think it should make a 
> best effort to continue publishing the remaining timeline entities; one failure 
> should not prevent all follow-up entities from being published.






[jira] [Commented] (YARN-6326) Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks app in SLS

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928954#comment-15928954
 ] 

Hadoop QA commented on YARN-6326:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 1 new + 233 unchanged 
- 45 fixed = 234 total (was 278) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m  7s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6326 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859155/YARN-6326.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d16fb8a89b94 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09ad8ef |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15304/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15304/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15304/testRe

[jira] [Created] (YARN-6356) Allow different values of yarn.log-aggregation.retain-seconds for succeeded and failed jobs

2017-03-16 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-6356:
---

 Summary: Allow different values of 
yarn.log-aggregation.retain-seconds for succeeded and failed jobs
 Key: YARN-6356
 URL: https://issues.apache.org/jira/browse/YARN-6356
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: log-aggregation
Reporter: Robert Kanter


It would be useful to have a value of {{yarn.log-aggregation.retain-seconds}} 
for succeeded jobs and a different value for failed/killed jobs.  For jobs that 
succeeded, you typically don't care about the logs, so a shorter retention time 
is fine (and saves space/blocks in HDFS).  For jobs that failed or were killed, 
the logs are much more important, and you're likely to want to keep them around 
for longer so you have time to look at them.

For instance, you could set it to keep logs for succeeded jobs for 1 day and 
logs for failed/killed jobs for 1 week.






[jira] [Commented] (YARN-6355) Interceptor framework for the YARN ApplicationMasterService

2017-03-16 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928925#comment-15928925
 ] 

Arun Suresh commented on YARN-6355:
---

[~vinodkv] / [~subru] / [~leftnoteasy] Thoughts ?

> Interceptor framework for the YARN ApplicationMasterService
> ---
>
> Key: YARN-6355
> URL: https://issues.apache.org/jira/browse/YARN-6355
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
> <-> RM communication and enforce policies. This is used both by YARN 
> federation (YARN-2915) as well as Distributed Scheduling (YARN-2877).
> This JIRA proposes to introduce a similar framework on the RM side, so 
> that pluggable policies can be enforced on the ApplicationMasterService centrally 
> as well.
> This would be similar in spirit to a Java Servlet Filter Chain, where the 
> order of the interceptors can be declared externally.
> One possible use case would be:
> the {{OpportunisticContainerAllocatorAMService}} is implemented as a wrapper 
> over the {{ApplicationMasterService}}. It would probably be better to 
> implement it as an Interceptor.






[jira] [Created] (YARN-6355) Interceptor framework for the YARN ApplicationMasterService

2017-03-16 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-6355:
-

 Summary: Interceptor framework for the YARN 
ApplicationMasterService
 Key: YARN-6355
 URL: https://issues.apache.org/jira/browse/YARN-6355
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Arun Suresh
Assignee: Arun Suresh


Currently on the NM, we have the {{AMRMProxy}} framework to intercept the AM 
<-> RM communication and enforce policies. This is used both by YARN federation 
(YARN-2915) as well as Distributed Scheduling (YARN-2877).

This JIRA proposes to introduce a similar framework on the RM side, so that 
pluggable policies can be enforced on the ApplicationMasterService centrally as 
well.

This would be similar in spirit to a Java Servlet Filter Chain, where the order 
of the interceptors can be declared externally.

One possible use case would be:
the {{OpportunisticContainerAllocatorAMService}} is implemented as a wrapper 
over the {{ApplicationMasterService}}. It would probably be better to implement 
it as an Interceptor.
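A rough sketch of the intended shape, purely for illustration (the interface and 
method names are invented, not the proposed API):
{code:java}
import java.util.List;

// Illustrative interceptor chain, wired in externally declared order.
interface AMSInterceptor {
  void setNext(AMSInterceptor next);
  String allocate(String request);  // stand-in for the real AM protocol call
}

final class AMSInterceptorChain {
  // Links the configured interceptors so each one can delegate to the next,
  // with the existing ApplicationMasterService logic as the terminal handler.
  static AMSInterceptor build(List<AMSInterceptor> configured, AMSInterceptor terminal) {
    AMSInterceptor next = terminal;
    for (int i = configured.size() - 1; i >= 0; i--) {
      configured.get(i).setNext(next);
      next = configured.get(i);
    }
    return next;
  }
}
{code}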






[jira] [Commented] (YARN-6111) [SLS] The realtimetrack.json is empty

2017-03-16 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928897#comment-15928897
 ] 

Yufei Gu commented on YARN-6111:


Most likely SLS did not run successfully. There are several things to check. 
Check the SLS doc to see if you missed something, such as configuration. Check 
your SLS logs; you might need to modify log4j.properties in SLS to enable more 
logging (YARN-6324 is trying to fix that).  

> [SLS] The realtimetrack.json is empty 
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.7.3
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>  Labels: test
> Fix For: 2.7.3
>
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks






[jira] [Commented] (YARN-6345) Add container tags to resource requests

2017-03-16 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928896#comment-15928896
 ] 

Subru Krishnan commented on YARN-6345:
--

[~miklos.szeg...@cloudera.com], we are currently thinking of this tag in the ASC 
(and RR) but not in the CLC. We could potentially use the work you are doing in 
YARN-5986 if and when we decide to propagate it to the CLC. Makes sense?

> Add container tags to resource requests
> ---
>
> Key: YARN-6345
> URL: https://issues.apache.org/jira/browse/YARN-6345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> This JIRA introduces the notion of container tags.
> When an application submits container requests, it is allowed to attach to 
> them a set of string tags. The corresponding resource requests will also 
> carry these tags.
> For example, a container that will be used for running an HBase Master can be 
> marked with the tag "hb-m". Another one belonging to a ZooKeeper application, 
> can be marked as "zk".
> Through container tags, we will be able to express constraints that refer to 
> containers with the given tags.






[jira] [Assigned] (YARN-6346) Expose API in ApplicationSubmissionContext to specify placement constraints

2017-03-16 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos reassigned YARN-6346:


Assignee: Konstantinos Karanasos

> Expose API in ApplicationSubmissionContext to specify placement constraints
> ---
>
> Key: YARN-6346
> URL: https://issues.apache.org/jira/browse/YARN-6346
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> We propose to extend the API of the {{ApplicationSubmissionContext}} to be 
> able to express placement constraints (e.g., affinity and anti-affinity) when 
> an application gets submitted.






[jira] [Updated] (YARN-6354) LeveldbRMStateStore can parse invalid keys when recovering reservations

2017-03-16 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-6354:
-
Priority: Major  (was: Critical)
 Summary: LeveldbRMStateStore can parse invalid keys when recovering 
reservations  (was: RM fails to upgrade to 2.8 with leveldb state store)

I found another instance where a rolling upgrade to 2.8 with leveldb did work 
successfully, so I dug a bit deeper into why this doesn't always fail.  It 
turns out that normally the reservation state keys happen to be the last keys 
in the database and therefore it works.  If the database happens to have any 
relatively short keys after the reservation keys then it breaks.  My local dev 
database had some short, lowercase keys leftover in it from some prior work, 
and that's how I ran into the issue.

Since it looks like this happens to not be a problem for now with "normal" RM 
leveldb databases I lowered the severity and updated the headline accordingly.

> LeveldbRMStateStore can parse invalid keys when recovering reservations
> ---
>
> Key: YARN-6354
> URL: https://issues.apache.org/jira/browse/YARN-6354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>
> When trying to upgrade an RM to 2.8 it fails with a 
> StringIndexOutOfBoundsException trying to load reservation state.






[jira] [Commented] (YARN-5468) Scheduling of long-running applications

2017-03-16 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928880#comment-15928880
 ] 

Konstantinos Karanasos commented on YARN-5468:
--

Hi [~grey], we have a design doc but we are in the process of updating it to 
reflect the latest discussions -- I will upload it within the next few days.
We have had extended discussions with [~leftnoteasy] to make sure we share the 
same APIs for defining constraints.

In brief, we will share the same constraint expressions (affinity and 
anti-affinity), but in a way that will allow us to add more expressive constraints 
later.
The user will be able to define application-wide constraints when submitting 
the application through the ApplicationSubmissionContext (YARN-6346).
Moreover, given that we consider using such constraints for applications with 
long-running containers (exclusively, at least in the beginning), we are 
planning to place containers with constraints in a holistic fashion (looking at 
multiple container requests and constraints at the same time), trading some 
scheduling latency for better placement decisions (given that these containers 
will run for hours/days/months, scheduling latency is not as critical). This 
placement will be performed outside the Capacity/Fair Scheduler to make sure we 
don't affect the scheduling latency of existing applications.

> Scheduling of long-running applications
> ---
>
> Key: YARN-5468
> URL: https://issues.apache.org/jira/browse/YARN-5468
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacityscheduler, fairscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5468.prototype.patch
>
>
> This JIRA is about the scheduling of applications with long-running tasks.
> It will include adding support to the YARN for a richer set of scheduling 
> constraints (such as affinity, anti-affinity, cardinality and time 
> constraints), and extending the schedulers to take them into account during 
> placement of containers to nodes.
> We plan to have both an online version that will accommodate such requests as 
> they arrive, as well as a Long-running Application Planner that will make 
> more global decisions by considering multiple applications at once.






[jira] [Resolved] (YARN-4590) SLS(Scheduler Load Simulator) web pages can't load css and js resource

2017-03-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu resolved YARN-4590.

Resolution: Duplicate

> SLS(Scheduler Load Simulator) web pages can't load css and js resource 
> ---
>
> Key: YARN-4590
> URL: https://issues.apache.org/jira/browse/YARN-4590
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: xupeng
>Priority: Minor
>
> HadoopVersion : 2.6.0 / with patch YARN-4367-branch-2
> 1. run command "./slsrun.sh 
> --input-rumen=../sample-data/2jobs2min-rumen-jh.json 
> --output-dir=../sample-data/"
> success
> 2. open web page "http://10.6.128.88:10001/track"; 
> can not load css and js resource 






[jira] [Commented] (YARN-6354) RM fails to upgrade to 2.8 with leveldb state store

2017-03-16 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928843#comment-15928843
 ] 

Jason Lowe commented on YARN-6354:
--

Sample stacktrace:
{noformat}
2017-03-16 15:17:26,616 INFO  [main] service.AbstractService 
(AbstractService.java:noteFailure(272)) - Service ResourceManager failed in 
state STARTED; cause: java.lang.StringIndexOutOfBoundsException: String index 
out of range: -17
java.lang.StringIndexOutOfBoundsException: String index out of range: -17
at java.lang.String.substring(String.java:1931)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore.loadReservationState(LeveldbRMStateStore.java:289)
at 
org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore.loadState(LeveldbRMStateStore.java:274)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:690)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1097)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1137)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1133)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1133)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1173)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1338)
{noformat}

This was broken by YARN-3736.  The recovery code is seeking to the 
RM_RESERVATION_KEY_PREFIX but failing to verify that the keys it sees in the 
loop actually have that key prefix.  Here's the relevant code:
{code}
  iter = new LeveldbIterator(db);
  iter.seek(bytes(RM_RESERVATION_KEY_PREFIX));
  while (iter.hasNext()) {
Entry<byte[], byte[]> entry = iter.next();
String key = asString(entry.getKey());

String planReservationString =
key.substring(RM_RESERVATION_KEY_PREFIX.length());
String[] parts = planReservationString.split(SEPARATOR);
if (parts.length != 2) {
  LOG.warn("Incorrect reservation state key " + key);
  continue;
}
{code}

The only way to terminate this loop is when the iterator runs out of keys, 
therefore the iteration loop will scan through *all* the keys in the database 
starting at the reservation key to the end.  If any key encountered is too 
short then we'll get the out of bounds exception when we try to do the 
substring.  
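One way to harden the loop, sketched here for illustration rather than as the 
eventual patch, is to stop as soon as a key outside the reservation prefix range 
is seen:
{code}
  iter = new LeveldbIterator(db);
  iter.seek(bytes(RM_RESERVATION_KEY_PREFIX));
  while (iter.hasNext()) {
    Entry<byte[], byte[]> entry = iter.next();
    String key = asString(entry.getKey());
    if (!key.startsWith(RM_RESERVATION_KEY_PREFIX)) {
      // leveldb keys are sorted, so everything past the prefix range is
      // unrelated state and must not be parsed as a reservation key
      break;
    }
    String planReservationString =
        key.substring(RM_RESERVATION_KEY_PREFIX.length());
    ...
  }
{code}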

Pinging [~adhoot] and [~asuresh] who were involved in YARN-3736.

> RM fails to upgrade to 2.8 with leveldb state store
> ---
>
> Key: YARN-6354
> URL: https://issues.apache.org/jira/browse/YARN-6354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Priority: Critical
>
> When trying to upgrade an RM to 2.8 it fails with a 
> StringIndexOutOfBoundsException trying to load reservation state.






[jira] [Commented] (YARN-6353) Clean up OrderingPolicy javadoc

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928833#comment-15928833
 ] 

Hadoop QA commented on YARN-6353:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 9 unchanged - 18 fixed = 11 total (was 27) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 882 unchanged - 15 fixed = 882 total (was 897) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 41m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6353 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859149/YARN-6353.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 58a5750af438 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 09ad8ef |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15303/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/15303/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results 

[jira] [Commented] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-03-16 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928811#comment-15928811
 ] 

Haibo Chen commented on YARN-6146:
--

Thanks for your comments [~varun_saxena]! 
bq. why not use the builder in existing createTimelineEntityFilters method. Why 
create a new one?
This is due to the method call in TimelineReaderWebServices.getFlows()
{code:java}
TimelineEntityFilters entityFilters =   
TimelineReaderWebServicesUtils.createTimelineEntityFilters(limit, null, 
null, null, null, null, null, null, null);  
entityFilters.setCreatedTimeBegin(range.dateStart); 
entityFilters.setCreatedTimeEnd(range.dateEnd);
{code}
In order to get rid of setCreatedTimeBegin() and setCreatedTimeEnd(), so that all 
fields in TimelineEntityFilters are immutable (the two fields are set only once 
here, so they are effectively immutable), 
I created another createTimelineEntityFilters() method that accepts the same set 
of parameters, but with createdTimeStart and createdTimeEnd typed as Long.
I guess I could parse the time range in TimelineReaderWebServices.getFlows() and 
then pass it to the existing createTimelineEntityFilters() method, but that'd be 
inconsistent with the fact that we parse the rest of the params in 
createTimelineEntityFilters. If you think keeping the existing method is 
preferred, I can update the patch accordingly.

Will address the rest of your comments in the new patch.
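For readers following along, the general builder shape under discussion looks 
roughly like this (a sketch only; the field names merely echo a few of the real 
TimelineEntityFilters fields):
{code:java}
// Illustrative builder: callers set only the filters they need and the built
// object keeps every field final/immutable.
public final class EntityFilters {
  private final Long limit;
  private final Long createdTimeBegin;
  private final Long createdTimeEnd;

  private EntityFilters(Builder b) {
    this.limit = b.limit;
    this.createdTimeBegin = b.createdTimeBegin;
    this.createdTimeEnd = b.createdTimeEnd;
  }

  public Long getLimit() { return limit; }
  public Long getCreatedTimeBegin() { return createdTimeBegin; }
  public Long getCreatedTimeEnd() { return createdTimeEnd; }

  public static final class Builder {
    private Long limit;
    private Long createdTimeBegin;
    private Long createdTimeEnd;

    public Builder limit(Long v) { this.limit = v; return this; }
    public Builder createdTimeBegin(Long v) { this.createdTimeBegin = v; return this; }
    public Builder createdTimeEnd(Long v) { this.createdTimeEnd = v; return this; }
    public EntityFilters build() { return new EntityFilters(this); }
  }
}
{code}
Callers would then chain only the setters they need, e.g. 
new EntityFilters.Builder().createdTimeBegin(start).createdTimeEnd(end).build().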


> Add Builder methods for TimelineEntityFilters
> -
>
> Key: YARN-6146
> URL: https://issues.apache.org/jira/browse/YARN-6146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
> Attachments: YARN-6146.01.patch, YARN-6146.02.patch, 
> YARN-6146-YARN-5355.01.patch, YARN-6146-YARN-5355.02.patch
>
>
> The timeline filters are evolving and can be add more and more filters. It is 
> better to start using Builder methods rather than changing constructor every 
> time for adding new filters. 






[jira] [Created] (YARN-6354) RM fails to upgrade to 2.8 with leveldb state store

2017-03-16 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-6354:


 Summary: RM fails to upgrade to 2.8 with leveldb state store
 Key: YARN-6354
 URL: https://issues.apache.org/jira/browse/YARN-6354
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.8.0
Reporter: Jason Lowe
Priority: Critical


When trying to upgrade an RM to 2.8 it fails with a 
StringIndexOutOfBoundsException trying to load reservation state.






[jira] [Commented] (YARN-6345) Add container tags to resource requests

2017-03-16 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928825#comment-15928825
 ] 

Konstantinos Karanasos commented on YARN-6345:
--

[~miklos.szeg...@cloudera.com], my worry is that the specific container 
specification you describe and the container tags are very different 
semantically, so I am not sure it is a good idea to consolidate them, even if 
we were to use key-value pairs for the container tags (which does not seem like 
it's needed).
But let's hear other people's opinions too.

> Add container tags to resource requests
> ---
>
> Key: YARN-6345
> URL: https://issues.apache.org/jira/browse/YARN-6345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> This JIRA introduces the notion of container tags.
> When an application submits container requests, it is allowed to attach to 
> them a set of string tags. The corresponding resource requests will also 
> carry these tags.
> For example, a container that will be used for running an HBase Master can be 
> marked with the tag "hb-m". Another one belonging to a ZooKeeper application, 
> can be marked as "zk".
> Through container tags, we will be able to express constraints that refer to 
> containers with the given tags.






[jira] [Updated] (YARN-6326) Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks app in SLS

2017-03-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6326:
---
Attachment: YARN-6326.004.patch

Fixed one style issue. Added methods to delete the metric output files once the 
unit test is done.

> Shouldn't use AppAttemptIds to fetch applications while AM Simulator tracks 
> app in SLS
> --
>
> Key: YARN-6326
> URL: https://issues.apache.org/jira/browse/YARN-6326
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6326.001.patch, YARN-6326.002.patch, 
> YARN-6326.003.patch, YARN-6326.004.patch
>
>
> This causes an NPE issue. Besides the NPE, the metrics won't reflect the 
> different attempts. We should pass ApplicationId instead of AppAttemptId. The 
> NPE caused by the issue:
> {code}
> 2017-03-13 20:43:39,153 INFO appmaster.AMSimulator: Submit a new application 
> application_1489463017173_0001
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.getApplicationAttempt(AbstractYarnScheduler.java:327)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getSchedulerApp(FairScheduler.java:1028)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.FairSchedulerMetrics.trackApp(FairSchedulerMetrics.java:68)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.ResourceSchedulerWrapper.addTrackedApp(ResourceSchedulerWrapper.java:799)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.trackApp(AMSimulator.java:338)
>   at 
> org.apache.hadoop.yarn.sls.appmaster.AMSimulator.firstStep(AMSimulator.java:156)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Exception in thread "pool-6-thread-1" java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.sls.scheduler.TaskRunner$Task.run(TaskRunner.java:105)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (YARN-6352) Header injections are possible in the application proxy servlet

2017-03-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928695#comment-15928695
 ] 

Varun Saxena commented on YARN-6352:


Apps#toAppId looks like a generic enough method which may not be only used for 
web pages.
How about catching the exception in WebAppProxyServlet and then sending back a 
custom message?
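A rough sketch of that suggestion (the servlet handling is simplified, the helper 
is hypothetical, and it assumes the parse failure surfaces as an 
IllegalArgumentException from the Apps#toAppId call named above):
{code:java}
// Sketch only: catch the parse failure locally and return a fixed message
// instead of echoing the attacker-controlled value back to the client.
try {
  ApplicationId id = Apps.toAppId(appId);
  proxyRequestForApp(id, req, resp);  // hypothetical helper for the happy path
} catch (IllegalArgumentException e) {
  resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Invalid application id");
  return;
}
{code}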

> Header injections are possible in the application proxy servlet
> ---
>
> Key: YARN-6352
> URL: https://issues.apache.org/jira/browse/YARN-6352
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: headerInjection.png, YARN-6352.001.patch
>
>
> This issue was found in WVS security tool. 






[jira] [Comment Edited] (YARN-6352) Header injections are possible in the application proxy servlet

2017-03-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928695#comment-15928695
 ] 

Varun Saxena edited comment on YARN-6352 at 3/16/17 7:31 PM:
-

Apps#toAppId looks like a generic enough method which may not be only used for 
constructing a message which is sent back in HTTP response.
How about catching the exception in WebAppProxyServlet and then sending back a 
custom message?


was (Author: varun_saxena):
Apps#toAppId looks like a generic enough method which may not be only used for 
web pages.
How about catching the exception in WebAppProxyServlet and then sending back a 
custom message?

> Header injections are possible in the application proxy servlet
> ---
>
> Key: YARN-6352
> URL: https://issues.apache.org/jira/browse/YARN-6352
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: headerInjection.png, YARN-6352.001.patch
>
>
> This issue was found in WVS security tool. 






[jira] [Updated] (YARN-6353) Clean up OrderingPolicy javadoc

2017-03-16 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6353:
---
Attachment: YARN-6353.001.patch

Fixed the javadocs and cleaned up some other minor stuff.

> Clean up OrderingPolicy javadoc
> ---
>
> Key: YARN-6353
> URL: https://issues.apache.org/jira/browse/YARN-6353
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
>  Labels: javadoc
> Attachments: YARN-6353.001.patch
>
>







[jira] [Created] (YARN-6353) Clean up OrderingPolicy javadoc

2017-03-16 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6353:
--

 Summary: Clean up OrderingPolicy javadoc
 Key: YARN-6353
 URL: https://issues.apache.org/jira/browse/YARN-6353
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 2.8.0
Reporter: Daniel Templeton
Assignee: Daniel Templeton
Priority: Minor









[jira] [Commented] (YARN-6352) Header injections are possible in the application proxy servlet

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928671#comment-15928671
 ] 

Hadoop QA commented on YARN-6352:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 12m 
58s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
22s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6352 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859146/YARN-6352.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b2ebd85ea616 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ba62b50 |
| Default Java | 1.8.0_121 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/15302/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15302/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15302/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Header injections are possible in the application proxy servlet
> ---
>
> Key: YARN-6352
> URL: https://issues.apache.org/jira/browse/YARN-6352
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>

[jira] [Commented] (YARN-6273) TestAMRMClient#testAllocationWithBlacklist fails intermittently

2017-03-16 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928638#comment-15928638
 ] 

Ray Chiang commented on YARN-6273:
--

Test

> TestAMRMClient#testAllocationWithBlacklist fails intermittently
> ---
>
> Key: YARN-6273
> URL: https://issues.apache.org/jira/browse/YARN-6273
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Ray Chiang
>
> I'm seeing this unit test fail in trunk:
> testAllocationWithBlacklist(org.apache.hadoop.yarn.client.api.impl.TestAMRMClient)
>   Time elapsed: 0.738 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:555)
> at org.junit.Assert.assertEquals(Assert.java:542)
> at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAllocationWithBlacklist(TestAMRMClient.java:721)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6352) Header injections are possible in the application proxy servlet

2017-03-16 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6352:

Attachment: YARN-6352.001.patch

> Header injections are possible in the application proxy servlet
> ---
>
> Key: YARN-6352
> URL: https://issues.apache.org/jira/browse/YARN-6352
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: headerInjection.png, YARN-6352.001.patch
>
>
> This issue was found by the WVS security tool. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3767) Yarn Scheduler Load Simulator does not work

2017-03-16 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928579#comment-15928579
 ] 

Yufei Gu commented on YARN-3767:


SLS is broken in some ways. We are trying to fix it in the umbrella JIRA YARN-5065.

> Yarn Scheduler Load Simulator does not work
> ---
>
> Key: YARN-3767
> URL: https://issues.apache.org/jira/browse/YARN-3767
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
> Environment: OS X 10.10.  JDK 1.7
>Reporter: David Kjerrumgaard
>
> Running the SLS, as per the instructions on the web, results in a 
> NullPointerException being thrown.
> Steps followed to create error:
> 1) Download Apache Hadoop 2.7.0 tarball from Apache site
> 2) Untar 2.7.0 tarball into /opt directory
> 3) Execute the following command: 
> /opt/hadoop-2.7.0/share/hadoop/tools/sls//bin/slsrun.sh 
> --input-rumen=/opt/hadoop-2.7.0/share/hadoop/tools/sls/sample-data/2jobs2min-rumen-jh.json
>  --output-dir=/tmp
> Results in the following error:
> 15/06/04 10:25:41 INFO rmnode.RMNodeImpl: a2118.smile.com:2 Node Transitioned 
> from NEW to RUNNING
> 15/06/04 10:25:41 INFO capacity.CapacityScheduler: Added node 
> a2118.smile.com:2 clusterResource: 
> 15/06/04 10:25:41 INFO util.RackResolver: Resolved a2115.smile.com to 
> /default-rack
> 15/06/04 10:25:41 INFO resourcemanager.ResourceTrackerService: NodeManager 
> from node a2115.smile.com(cmPort: 3 httpPort: 80) registered with capability: 
> , assigned nodeId a2115.smile.com:3
> 15/06/04 10:25:41 INFO rmnode.RMNodeImpl: a2115.smile.com:3 Node Transitioned 
> from NEW to RUNNING
> 15/06/04 10:25:41 INFO capacity.CapacityScheduler: Added node 
> a2115.smile.com:3 clusterResource: 
> Exception in thread "main" java.lang.RuntimeException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
>   at 
> org.apache.hadoop.yarn.sls.SLSRunner.startAMFromRumenTraces(SLSRunner.java:398)
>   at org.apache.hadoop.yarn.sls.SLSRunner.startAM(SLSRunner.java:250)
>   at org.apache.hadoop.yarn.sls.SLSRunner.start(SLSRunner.java:145)
>   at org.apache.hadoop.yarn.sls.SLSRunner.main(SLSRunner.java:528)
> Caused by: java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.hash(ConcurrentHashMap.java:333)
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:988)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
>   ... 4 more



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-03-16 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928563#comment-15928563
 ] 

Kuhu Shukla commented on YARN-6315:
---

mvn install is flagging a lot of HDFS classes as duplicates. I have asked on 
HDFS-11431 since that seems related.
{code}
[WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed 
with message:
Duplicate classes found:

  Found in:
org.apache.hadoop:hadoop-client-api:jar:3.0.0-alpha3-SNAPSHOT:compile

org.apache.hadoop:hadoop-client-minicluster:jar:3.0.0-alpha3-SNAPSHOT:compile
  Duplicate classes:

org/apache/hadoop/hdfs/qjournal/protocol/QJournalProtocolProtos$GetJournalStateRequestProto$Builder.class
 
{code}

> Improve LocalResourcesTrackerImpl#isResourcePresent to return false for 
> corrupted files
> ---
>
> Key: YARN-6315
> URL: https://issues.apache.org/jira/browse/YARN-6315
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-6315.001.patch, YARN-6315.002.patch, 
> YARN-6315.003.patch, YARN-6315.004.patch
>
>
> We currently check if a resource is present by making sure that the file 
> exists locally. There can be a case where the LocalizationTracker thinks that 
> it has the resource if the file exists but with size 0 or less than the 
> "expected" size of the LocalResource. This JIRA tracks the change to harden 
> the isResourcePresent call to address that case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3767) Yarn Scheduler Load Simulator does not work

2017-03-16 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino resolved YARN-3767.

Resolution: Won't Fix

From my read of the conversation, I think this is not an actual issue. I will 
close it for now; please re-open if you disagree.

> Yarn Scheduler Load Simulator does not work
> ---
>
> Key: YARN-3767
> URL: https://issues.apache.org/jira/browse/YARN-3767
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
> Environment: OS X 10.10.  JDK 1.7
>Reporter: David Kjerrumgaard
>
> Running the SLS, as per the instructions on the web, results in a 
> NullPointerException being thrown.
> Steps followed to create error:
> 1) Download Apache Hadoop 2.7.0 tarball from Apache site
> 2) Untar 2.7.0 tarball into /opt directory
> 3) Execute the following command: 
> /opt/hadoop-2.7.0/share/hadoop/tools/sls//bin/slsrun.sh 
> --input-rumen=/opt/hadoop-2.7.0/share/hadoop/tools/sls/sample-data/2jobs2min-rumen-jh.json
>  --output-dir=/tmp
> Results in the following error:
> 15/06/04 10:25:41 INFO rmnode.RMNodeImpl: a2118.smile.com:2 Node Transitioned 
> from NEW to RUNNING
> 15/06/04 10:25:41 INFO capacity.CapacityScheduler: Added node 
> a2118.smile.com:2 clusterResource: 
> 15/06/04 10:25:41 INFO util.RackResolver: Resolved a2115.smile.com to 
> /default-rack
> 15/06/04 10:25:41 INFO resourcemanager.ResourceTrackerService: NodeManager 
> from node a2115.smile.com(cmPort: 3 httpPort: 80) registered with capability: 
> , assigned nodeId a2115.smile.com:3
> 15/06/04 10:25:41 INFO rmnode.RMNodeImpl: a2115.smile.com:3 Node Transitioned 
> from NEW to RUNNING
> 15/06/04 10:25:41 INFO capacity.CapacityScheduler: Added node 
> a2115.smile.com:3 clusterResource: 
> Exception in thread "main" java.lang.RuntimeException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
>   at 
> org.apache.hadoop.yarn.sls.SLSRunner.startAMFromRumenTraces(SLSRunner.java:398)
>   at org.apache.hadoop.yarn.sls.SLSRunner.startAM(SLSRunner.java:250)
>   at org.apache.hadoop.yarn.sls.SLSRunner.start(SLSRunner.java:145)
>   at org.apache.hadoop.yarn.sls.SLSRunner.main(SLSRunner.java:528)
> Caused by: java.lang.NullPointerException
>   at 
> java.util.concurrent.ConcurrentHashMap.hash(ConcurrentHashMap.java:333)
>   at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:988)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:126)
>   ... 4 more



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6345) Add container tags to resource requests

2017-03-16 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928522#comment-15928522
 ] 

Miklos Szegedi commented on YARN-6345:
--

bq. How would it simplify the CLC though?
There is no need to add another proto entry for container configs next to 
container tags, and the propagation logic can be the same as well.

> Add container tags to resource requests
> ---
>
> Key: YARN-6345
> URL: https://issues.apache.org/jira/browse/YARN-6345
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> This JIRA introduces the notion of container tags.
> When an application submits container requests, it is allowed to attach to 
> them a set of string tags. The corresponding resource requests will also 
> carry these tags.
> For example, a container that will be used for running an HBase Master can be 
> marked with the tag "hb-m". Another one, belonging to a ZooKeeper application, 
> can be marked as "zk".
> Through container tags, we will be able to express constraints that refer to 
> containers with the given tags.
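
Since the tag API described above is only a proposal, the snippet below is a 
purely hypothetical illustration of a request object carrying such tags; none of 
these class or method names are the real YARN API:

{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration only; not the actual YARN ResourceRequest API.
public class TaggedContainerRequest {
  private final int memoryMb;
  private final int vcores;
  private final Set<String> tags;

  public TaggedContainerRequest(int memoryMb, int vcores, Set<String> tags) {
    this.memoryMb = memoryMb;
    this.vcores = vcores;
    this.tags = Collections.unmodifiableSet(new HashSet<>(tags));
  }

  public int getMemoryMb() { return memoryMb; }
  public int getVcores() { return vcores; }
  public Set<String> getTags() { return tags; }

  public static void main(String[] args) {
    // An HBase Master container tagged "hb-m" and a ZooKeeper container tagged "zk".
    TaggedContainerRequest hbaseMaster =
        new TaggedContainerRequest(4096, 2, Collections.singleton("hb-m"));
    TaggedContainerRequest zk =
        new TaggedContainerRequest(2048, 1, Collections.singleton("zk"));
    System.out.println(hbaseMaster.getTags() + " " + zk.getTags());
  }
}
{code}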



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6352) Header injections are possible in the application proxy servlet

2017-03-16 Thread Naganarasimha G R (JIRA)
Naganarasimha G R created YARN-6352:
---

 Summary: Header injections are possible in the application proxy 
servlet
 Key: YARN-6352
 URL: https://issues.apache.org/jira/browse/YARN-6352
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Naganarasimha G R
Assignee: Naganarasimha G R
 Attachments: headerInjection.png

This issue was found by the WVS security tool. 




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6352) Header injections are possible in the application proxy servlet

2017-03-16 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-6352:

Attachment: headerInjection.png

> Header injections are possible in the application proxy servlet
> ---
>
> Key: YARN-6352
> URL: https://issues.apache.org/jira/browse/YARN-6352
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: headerInjection.png
>
>
> This issue was found by the WVS security tool. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928479#comment-15928479
 ] 

Hadoop QA commented on YARN-6315:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 13m  
0s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 33 unchanged - 1 fixed = 33 total (was 34) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
59s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6315 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859127/YARN-6315.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b5cdb77555a2 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ba62b50 |
| Default Java | 1.8.0_121 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/15301/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15301/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15301/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve LocalResourcesTrackerImpl#isResourcePresent to return false for 
> corrupted files
> ---
>
> Key: YARN-6315
> URL: https://i

[jira] [Created] (YARN-6351) Have RM match relaxedLocality request via time instead of missedOpportunities

2017-03-16 Thread Roni Burd (JIRA)
Roni Burd created YARN-6351:
---

 Summary: Have RM match relaxedLocality request via time instead of 
missedOpportunities 
 Key: YARN-6351
 URL: https://issues.apache.org/jira/browse/YARN-6351
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacityscheduler, yarn
Reporter: Roni Burd


When using relaxLocality=true, the current CapacityScheduler strategy is to 
wait for a certain number of missedOpportunities before scheduling a request on a 
node, a rack, or off_switch. This means that the missedOpportunities param depends 
on the number of nodes in the cluster and the duration of each container.

A different strategy would be to wait a configurable amount of time before 
deciding to go to a different location.

This JIRA proposes to extract the current behavior into a pluggable strategy 
pattern and create a new strategy that is simply based on time.
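
As a rough illustration of the time-based alternative (the class name and 
thresholds below are invented for the example; this is not CapacityScheduler 
code):

{code}
// Sketch of a time-based locality-relaxation decision: the allowed placement
// level depends only on how long the request has waited, not on how many
// scheduling opportunities were missed.
public final class TimeBasedLocalityDelay {
  private final long nodeLocalDelayMs;
  private final long rackLocalDelayMs;

  public TimeBasedLocalityDelay(long nodeLocalDelayMs, long rackLocalDelayMs) {
    this.nodeLocalDelayMs = nodeLocalDelayMs;
    this.rackLocalDelayMs = rackLocalDelayMs;
  }

  /** Returns the locality level a request may be scheduled at after waiting. */
  public String allowedLocality(long requestCreatedAtMs, long nowMs) {
    long waitedMs = nowMs - requestCreatedAtMs;
    if (waitedMs < nodeLocalDelayMs) {
      return "NODE_LOCAL";
    } else if (waitedMs < rackLocalDelayMs) {
      return "RACK_LOCAL";
    }
    return "OFF_SWITCH";
  }

  public static void main(String[] args) {
    TimeBasedLocalityDelay delay = new TimeBasedLocalityDelay(5_000, 15_000);
    long created = System.currentTimeMillis();
    System.out.println(delay.allowedLocality(created, created));           // NODE_LOCAL
    System.out.println(delay.allowedLocality(created, created + 20_000));  // OFF_SWITCH
  }
}
{code}

Unlike a missed-opportunity counter, the thresholds here would not need to be 
retuned when the cluster size or typical container duration changes.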



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6350) Add JMX counters to track locality matching (node, rack, off_switch)

2017-03-16 Thread Roni Burd (JIRA)
Roni Burd created YARN-6350:
---

 Summary: Add JMX counters to track locality matching (node, rack, 
off_switch)
 Key: YARN-6350
 URL: https://issues.apache.org/jira/browse/YARN-6350
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: metrics, yarn
Reporter: Roni Burd
Priority: Minor


When using relaxLocality=true, it would be nice to have metrics to see how well 
the RM is fulfilling the requests. This helps to tune the relaxLocality params 
and compare the behavior.

The proposal is to have 3 metrics exposed via JMX:
-node matching % 
-rack matching %
-off_switch matching % 

Each one represents the matching that occurred compared to the total requested. 

The metrics would have to take into account the type of request (e.g. node, ANY, 
etc.).
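
A hedged sketch of the proposed counters and percentages (the class and method 
names are illustrative, not an existing YARN metrics source; in the RM the object 
would additionally be registered as a JMX bean so monitoring tools can read it):

{code}
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only: count matches per locality level and expose the
// three percentages proposed in this JIRA.
public class LocalityMatchingCounters {
  private final AtomicLong nodeLocal = new AtomicLong();
  private final AtomicLong rackLocal = new AtomicLong();
  private final AtomicLong offSwitch = new AtomicLong();

  public void recordNodeLocal() { nodeLocal.incrementAndGet(); }
  public void recordRackLocal() { rackLocal.incrementAndGet(); }
  public void recordOffSwitch() { offSwitch.incrementAndGet(); }

  private double percent(long part) {
    long total = nodeLocal.get() + rackLocal.get() + offSwitch.get();
    return total == 0 ? 0.0 : 100.0 * part / total;
  }

  public double getNodeMatchingPercent()      { return percent(nodeLocal.get()); }
  public double getRackMatchingPercent()      { return percent(rackLocal.get()); }
  public double getOffSwitchMatchingPercent() { return percent(offSwitch.get()); }

  public static void main(String[] args) {
    LocalityMatchingCounters c = new LocalityMatchingCounters();
    c.recordNodeLocal();
    c.recordOffSwitch();
    System.out.println(c.getNodeMatchingPercent());  // 50.0
  }
}
{code}

As the description notes, a real implementation would also have to bucket these 
figures by request type (node-specific, rack-specific, ANY) rather than lumping 
them together.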



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6334) TestRMFailover#testAutomaticFailover always passes even when it should fail

2017-03-16 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-6334:
---
Summary: TestRMFailover#testAutomaticFailover always passes even when it 
should fail  (was: TestRMFailover#testAutomaticFailover always passes even RM 
didn't transition to Standby.)

> TestRMFailover#testAutomaticFailover always passes even when it should fail
> ---
>
> Key: YARN-6334
> URL: https://issues.apache.org/jira/browse/YARN-6334
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-6334.001.patch, YARN-6334.002.patch, 
> YARN-6334.003.patch, YARN-6334.004.patch
>
>
> Due to a bug in the {{while}} loop:
> {code}
> int maxWaitingAttempts = 2000;
> while (maxWaitingAttempts-- > 0 ) {
>   if (rm.getRMContext().getHAServiceState() == HAServiceState.STANDBY) {
> break;
>   }
>   Thread.sleep(1);
> }
> Assert.assertFalse("RM didn't transition to Standby ",
> maxWaitingAttempts == 0);
> {code}
> maxWaitingAttempts ends up as -1 (not 0) if the RM didn't transition to Standby, 
> so the assertFalse check never fails.
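
For illustration, one way the check could be tightened (a sketch building on the 
snippet above, not necessarily the fix that was committed):

{code}
// Sketch only: track the observed state in a flag so the post-decrement's
// off-by-one (-1 vs. 0) cannot mask a failed transition.
boolean transitionedToStandby = false;
int maxWaitingAttempts = 2000;
while (maxWaitingAttempts-- > 0) {
  if (rm.getRMContext().getHAServiceState() == HAServiceState.STANDBY) {
    transitionedToStandby = true;
    break;
  }
  Thread.sleep(1);
}
Assert.assertTrue("RM didn't transition to Standby", transitionedToStandby);
{code}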



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6146) Add Builder methods for TimelineEntityFilters

2017-03-16 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928452#comment-15928452
 ] 

Varun Saxena commented on YARN-6146:


Thanks [~haibochen] for the patch. Looks straightforward. A couple of comments:

# In TimelineReaderWebServicesUtils, why not use the builder in the existing 
createTimelineEntityFilters method instead of creating a new one?
# It seems some of the checkstyle issues can be fixed.
# In the Builder, maybe rename fromid to fromId (see the sketch below).
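
For illustration only, the general shape being suggested (hypothetical names, 
not the code in the patch or the real TimelineEntityFilters class):

{code}
// Hypothetical illustration of the builder shape; adding a new filter later
// means adding a builder method instead of changing a constructor signature.
public final class EntityFilters {
  private final Long limit;
  private final String fromId;

  private EntityFilters(Builder b) {
    this.limit = b.limit;
    this.fromId = b.fromId;
  }

  public Long getLimit() { return limit; }
  public String getFromId() { return fromId; }

  public static final class Builder {
    private Long limit;
    private String fromId;  // camelCase, per comment #3

    public Builder limit(long limit) { this.limit = limit; return this; }
    public Builder fromId(String fromId) { this.fromId = fromId; return this; }
    public EntityFilters build() { return new EntityFilters(this); }
  }
}
// Usage: new EntityFilters.Builder().limit(100).fromId("entity_42").build();
{code}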


> Add Builder methods for TimelineEntityFilters
> -
>
> Key: YARN-6146
> URL: https://issues.apache.org/jira/browse/YARN-6146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
> Attachments: YARN-6146.01.patch, YARN-6146.02.patch, 
> YARN-6146-YARN-5355.01.patch, YARN-6146-YARN-5355.02.patch
>
>
> The timeline filters are evolving and can be add more and more filters. It is 
> better to start using Builder methods rather than changing constructor every 
> time for adding new filters. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-03-16 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-6315:
--
Attachment: YARN-6315.004.patch

Thank you [~jlowe] for the feedback. I have made the changes accordingly.

> Improve LocalResourcesTrackerImpl#isResourcePresent to return false for 
> corrupted files
> ---
>
> Key: YARN-6315
> URL: https://issues.apache.org/jira/browse/YARN-6315
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-6315.001.patch, YARN-6315.002.patch, 
> YARN-6315.003.patch, YARN-6315.004.patch
>
>
> We currently check if a resource is present by making sure that the file 
> exists locally. There can be a case where the LocalizationTracker thinks that 
> it has the resource if the file exists but with size 0 or less than the 
> "expected" size of the LocalResource. This JIRA tracks the change to harden 
> the isResourcePresent call to address that case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6335) Port slider's groovy unit tests to yarn native services

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928408#comment-15928408
 ] 

Hadoop QA commented on YARN-6335:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 112 new or modified 
test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 2s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 22s{color} 
| {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core
 generated 24 new + 34 unchanged - 0 fixed = 58 total (was 34) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core:
 The patch generated 959 new + 225 unchanged - 0 fixed = 1184 total (was 225) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12858929/YARN-6335-yarn-native-services.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 1490fe259149 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 39ef50c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/15300/artifact/patchprocess/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15300/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_h

[jira] [Commented] (YARN-6315) Improve LocalResourcesTrackerImpl#isResourcePresent to return false for corrupted files

2017-03-16 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928292#comment-15928292
 ] 

Jason Lowe commented on YARN-6315:
--

Thanks for updating the patch!

Catching Exception is too wide of a net here, IMHO.  It masks serious issues 
like SecurityException (which is not a normal I/O permission-denied type of 
error), NullPointerException, IllegalArgumentException, 
UnsupportedOperationException, etc.  If the operation really is unsupported, 
then this code is going to think every resource is missing after it localizes 
it, which isn't good.  It would be a dist cache that doesn't cache.  We should 
just catch NoSuchFileException and IOException.  In the no-such-file case we can 
simply log that it isn't there, but in the IOException case, since we don't 
really know what happened, we should log the full exception trace rather than 
just the exception message to give proper context for debugging.

Nit: The attributes variable declaration should be as close to the usage as 
possible.  It only needs to be just before the {{try}} block rather than 
outside the {{if}} block.
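
A sketch of that narrower handling (class, method, and logging choices below are 
illustrative, not the exact patch):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

// Illustrative sketch of the suggested exception handling; not the actual patch.
public class ResourcePresenceCheckSketch {
  public boolean isResourcePresent(String localPath, long expectedSize) {
    Path path = Paths.get(localPath);
    BasicFileAttributes attributes;
    try {
      attributes = Files.readAttributes(path, BasicFileAttributes.class);
    } catch (NoSuchFileException e) {
      // Expected case: the file simply isn't there; a short log line is enough.
      System.out.println("Resource " + localPath + " is not present");
      return false;
    } catch (IOException e) {
      // Unknown I/O failure: log the full stack trace for debugging context
      // (a real implementation would use the NM's logger instead of stderr).
      e.printStackTrace();
      return false;
    }
    // Treat a truncated or zero-length file as missing so it gets re-localized.
    return attributes.size() >= expectedSize;
  }
}
{code}

Note that NoSuchFileException extends IOException, so it has to be caught first.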


> Improve LocalResourcesTrackerImpl#isResourcePresent to return false for 
> corrupted files
> ---
>
> Key: YARN-6315
> URL: https://issues.apache.org/jira/browse/YARN-6315
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-6315.001.patch, YARN-6315.002.patch, 
> YARN-6315.003.patch
>
>
> We currently check if a resource is present by making sure that the file 
> exists locally. There can be a case where the LocalizationTracker thinks that 
> it has the resource if the file exists but with size 0 or less than the 
> "expected" size of the LocalResource. This JIRA tracks the change to harden 
> the isResourcePresent call to address that case.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5468) Scheduling of long-running applications

2017-03-16 Thread Lei Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928256#comment-15928256
 ] 

Lei Guo commented on YARN-5468:
---

[~kkaranasos], do we have a design document? I'd like to get more details 
regarding the difference between this JIRA and YARN-4902, and whether there 
will be some feature interaction between these two features.

> Scheduling of long-running applications
> ---
>
> Key: YARN-5468
> URL: https://issues.apache.org/jira/browse/YARN-5468
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacityscheduler, fairscheduler
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5468.prototype.patch
>
>
> This JIRA is about the scheduling of applications with long-running tasks.
> It will include adding support to YARN for a richer set of scheduling 
> constraints (such as affinity, anti-affinity, cardinality and time 
> constraints), and extending the schedulers to take them into account during 
> placement of containers to nodes.
> We plan to have both an online version that will accommodate such requests as 
> they arrive, as well as a Long-running Application Planner that will make 
> more global decisions by considering multiple applications at once.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6309) Fair scheduler docs should have the queue and queuePlacementPolicy elements listed in bold so that they're easier to see

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928249#comment-15928249
 ] 

Hadoop QA commented on YARN-6309:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 13m 
25s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6309 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859111/YARN-6309.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 5b7a34f8e2fe 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7114bad |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/15299/artifact/patchprocess/branch-mvninstall-root.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15299/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fair scheduler docs should have the queue and queuePlacementPolicy elements 
> listed in bold so that they're easier to see
> 
>
> Key: YARN-6309
> URL: https://issues.apache.org/jira/browse/YARN-6309
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: esmaeil mirzaee
>Priority: Minor
>  Labels: docs, newbie
> Attachments: YARN_6309.001.patch, YARN-6309.patch
>
>
> Under {{Allocation file format : Queue elements}}, all of the element names 
> should be bold, e.g. {{minResources}}, {{maxResources}}, etc.  Same for 
> {{Allocation file format : A queuePlacementPolicy element}}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4051) ContainerKillEvent lost when container is still recovering and application finishes

2017-03-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928239#comment-15928239
 ] 

Hudson commented on YARN-4051:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11414 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11414/])
YARN-4051. ContainerKillEvent lost when container is still recovering (jlowe: 
rev 7114baddb627628a54cdab77f68504332a5a0e28)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/MockContainer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/Container.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java


> ContainerKillEvent lost when container is still recovering and application 
> finishes
> ---
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch, 
> YARN-4051.06.patch, YARN-4051.07.patch, YARN-4051.08.patch, 
> YARN-4051.08.patch-branch-2
>
>
> As in YARN-4050, the NM event dispatcher is blocked and the container is in the 
> New state; when we finish the application, the container is still alive even 
> after the NM event dispatcher is unblocked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4051) ContainerKillEvent lost when container is still recovering and application finishes

2017-03-16 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928224#comment-15928224
 ] 

sandflee commented on YARN-4051:


Thanks [~jlowe] for your review and commit!

> ContainerKillEvent lost when container is still recovering and application 
> finishes
> ---
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch, 
> YARN-4051.06.patch, YARN-4051.07.patch, YARN-4051.08.patch, 
> YARN-4051.08.patch-branch-2
>
>
> As in YARN-4050, the NM event dispatcher is blocked and the container is in the 
> New state; when we finish the application, the container is still alive even 
> after the NM event dispatcher is unblocked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6309) Fair scheduler docs should have the queue and queuePlacementPolicy elements listed in bold so that they're easier to see

2017-03-16 Thread esmaeil mirzaee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

esmaeil mirzaee updated YARN-6309:
--
Attachment: YARN-6309.patch

I am sorry to be late. I have changed my branch to trunk, and I think it could 
solve the problem.


Best Wishes

> Fair scheduler docs should have the queue and queuePlacementPolicy elements 
> listed in bold so that they're easier to see
> 
>
> Key: YARN-6309
> URL: https://issues.apache.org/jira/browse/YARN-6309
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: esmaeil mirzaee
>Priority: Minor
>  Labels: docs, newbie
> Attachments: YARN_6309.001.patch, YARN-6309.patch
>
>
> Under {{Allocation file format : Queue elements}}, all of the element names 
> should be bold, e.g. {{minResources}}, {{maxResources}}, etc.  Same for 
> {{Allocation file format : A queuePlacementPolicy element}}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6349) Container kill request from AM can be lost if container is still recovering

2017-03-16 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-6349:


 Summary: Container kill request from AM can be lost if container 
is still recovering
 Key: YARN-6349
 URL: https://issues.apache.org/jira/browse/YARN-6349
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Jason Lowe


If container recovery takes an excessive amount of time (e.g.: HDFS is slow) 
then the NM could start servicing requests before all containers have 
recovered.  If an AM tries to kill a container while it is still recovering 
then this kill request could be lost.
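
One possible mitigation, sketched under the assumption that such requests could 
simply be remembered and replayed; this is an illustration only, not a 
description of what the NodeManager actually does:

{code}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of one possible mitigation: park kill requests that arrive while a
// container is still recovering and replay them when recovery finishes.
public class DeferredKillTracker {
  private final Set<String> recovering = ConcurrentHashMap.newKeySet();
  private final Set<String> pendingKills = ConcurrentHashMap.newKeySet();

  public void startRecovering(String containerId) {
    recovering.add(containerId);
  }

  /** Returns true if the kill can be applied now, false if it was deferred. */
  public boolean requestKill(String containerId) {
    if (recovering.contains(containerId)) {
      pendingKills.add(containerId);   // remember it instead of dropping it
      return false;
    }
    return true;
  }

  /** Called when recovery completes; returns true if a deferred kill should fire. */
  public boolean finishRecovery(String containerId) {
    recovering.remove(containerId);
    return pendingKills.remove(containerId);
  }
}
{code}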



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6349) Container kill request from AM can be lost if container is still recovering

2017-03-16 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928185#comment-15928185
 ] 

Jason Lowe commented on YARN-6349:
--

See YARN-4051 for related discussion.

> Container kill request from AM can be lost if container is still recovering
> ---
>
> Key: YARN-6349
> URL: https://issues.apache.org/jira/browse/YARN-6349
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Jason Lowe
>
> If container recovery takes an excessive amount of time (e.g.: HDFS is slow) 
> then the NM could start servicing requests before all containers have 
> recovered.  If an AM tries to kill a container while it is still recovering 
> then this kill request could be lost.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4051) ContainerKillEvent lost when container is still recovering and application finishes

2017-03-16 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15928139#comment-15928139
 ] 

Jason Lowe commented on YARN-4051:
--

+1 for the branch-2 patch as well.  The unit test failure appears to be 
unrelated, and the test passes for me locally with the patch applied.

Committing this.

> ContainerKillEvent lost when container is still recovering and application 
> finishes
> ---
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch, 
> YARN-4051.06.patch, YARN-4051.07.patch, YARN-4051.08.patch, 
> YARN-4051.08.patch-branch-2
>
>
> As in YARN-4050, the NM event dispatcher is blocked and the container is in the 
> New state; when we finish the application, the container is still alive even 
> after the NM event dispatcher is unblocked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927948#comment-15927948
 ] 

Hadoop QA commented on YARN-6141:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-6141 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859082/YARN-6141.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 8dbe6d22e5f2 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6d95866 |
| Default Java | 1.8.0_121 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/15298/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/15298/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ppc64le on Linux doesn't trigger __linux get_executable codepath
> 
>
> Key: YARN-6141
> URL: https://issues.apache.org/jira/browse/YARN-6141
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
> Environment: $ uname -a
> Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 
> 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Sonia Garudi
>  Labels: ppc64le
> Attachments: YARN-6141.patch
>
>
> On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' 
> project with the below error:
> Cannot safely determine executable path with a relative HADOOP_CONF_DIR on 
> this operating system.
> [WARNING]  #error Cannot safely determine executable path with a relative 
> HADOOP_CONF_DIR on this operating system.
> [WARNING]   ^
> [WARNING] make[2]: *** 
> [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o]
>  Error 1

[jira] [Updated] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath

2017-03-16 Thread Sonia Garudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sonia Garudi updated YARN-6141:
---
Attachment: YARN-6141.patch

> ppc64le on Linux doesn't trigger __linux get_executable codepath
> 
>
> Key: YARN-6141
> URL: https://issues.apache.org/jira/browse/YARN-6141
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
> Environment: $ uname -a
> Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 
> 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Sonia Garudi
>  Labels: ppc64le
> Attachments: YARN-6141.patch
>
>
> On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' 
> project with the below error:
> Cannot safely determine executable path with a relative HADOOP_CONF_DIR on 
> this operating system.
> [WARNING]  #error Cannot safely determine executable path with a relative 
> HADOOP_CONF_DIR on this operating system.
> [WARNING]   ^
> [WARNING] make[2]: *** 
> [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o]
>  Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2
> [WARNING] make: *** [all] Error 2
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Cmake version used :
> $ /usr/bin/cmake --version
> cmake version 2.8.12.2
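For readers unfamiliar with the native build, the #error above comes from a
platform-dispatch guard in the container-executor sources: when none of the
recognised platform macros is defined, the preprocessor stops the build with
exactly this message. A minimal sketch of that pattern (illustrative only; the
real get_executable.c differs in detail):

{code}
/*
 * Illustrative sketch of the guard pattern behind the reported #error.
 * Hypothetical; the real get_executable.c differs in detail.
 */
#include <stdio.h>

#if defined(__linux)   /* GNU-extension spelling; absent under -std=c99 on some targets */
static const char *platform(void) { return "linux"; }
#elif defined(__APPLE__)
static const char *platform(void) { return "darwin"; }
#else
#error Cannot safely determine executable path with a relative HADOOP_CONF_DIR on this operating system.
#endif

int main(void) {
  printf("building the %s code path\n", platform());
  return 0;
}
{code}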



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath

2017-03-16 Thread Sonia Garudi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sonia Garudi updated YARN-6141:
---
Attachment: (was: YARN-6141.diff)

> ppc64le on Linux doesn't trigger __linux get_executable codepath
> 
>
> Key: YARN-6141
> URL: https://issues.apache.org/jira/browse/YARN-6141
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
> Environment: $ uname -a
> Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 
> 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Sonia Garudi
>  Labels: ppc64le
>
> On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' 
> project with the below error:
> Cannot safely determine executable path with a relative HADOOP_CONF_DIR on 
> this operating system.
> [WARNING]  #error Cannot safely determine executable path with a relative 
> HADOOP_CONF_DIR on this operating system.
> [WARNING]   ^
> [WARNING] make[2]: *** 
> [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o]
>  Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2
> [WARNING] make: *** [all] Error 2
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Cmake version used :
> $ /usr/bin/cmake --version
> cmake version 2.8.12.2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4051) ContainerKillEvent lost when container is still recovering and application finishes

2017-03-16 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927733#comment-15927733
 ] 

sandflee commented on YARN-4051:


Updated the patch for branch-2.

> ContainerKillEvent lost when container is still recovering and application 
> finishes
> ---
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch, 
> YARN-4051.06.patch, YARN-4051.07.patch, YARN-4051.08.patch, 
> YARN-4051.08.patch-branch-2
>
>
> As in YARN-4050, when the NM event dispatcher is blocked and a container is 
> still in the New state, finishing the application leaves that container 
> alive even after the NM event dispatcher is unblocked.
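
As a rough illustration of the race described above (a hypothetical toy, not
YARN's actual ContainerImpl state machine), a kill that arrives while the
container is still recovering has to be remembered and replayed once recovery
completes; otherwise it is silently dropped:

{code}
/* Hypothetical toy model of the "kill lost during recovery" race.
 * Not YARN code; it only illustrates deferring a kill that arrives
 * while the container is still in a recovering (New) state. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { RECOVERING, RUNNING, DONE } state_t;

typedef struct {
  state_t state;
  bool kill_pending;   /* remember a kill that arrived too early */
} container_t;

static void on_kill(container_t *c) {
  if (c->state == RUNNING) {
    c->state = DONE;                 /* normal path: kill takes effect */
  } else if (c->state == RECOVERING) {
    c->kill_pending = true;          /* defer instead of dropping */
  }
}

static void on_recovered(container_t *c) {
  c->state = RUNNING;
  if (c->kill_pending) {
    on_kill(c);                      /* replay the deferred kill */
  }
}

int main(void) {
  container_t c = { RECOVERING, false };
  on_kill(&c);        /* application finishes while still recovering */
  on_recovered(&c);   /* without the deferral, c would stay RUNNING */
  printf("final state: %s\n", c.state == DONE ? "DONE" : "NOT DONE");
  return 0;
}
{code}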



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4051) ContainerKillEvent lost when container is still recovering and application finishes

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927726#comment-15927726
 ] 

Hadoop QA commented on YARN-4051:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
41s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
4s{color} | {color:blue} The patch file was not named according to hadoop's 
naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute 
for instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 173 unchanged - 1 fixed = 173 total (was 174) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 55s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.yarn.server.nodemanager.webapp.TestNMWebServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | YARN-4051 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/a

[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927711#comment-15927711
 ] 

Hadoop QA commented on YARN-2962:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 250 unchanged - 2 fixed = 255 total (was 252) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 41m 
24s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-2962 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859042/YARN-2962.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux af4ac52c9b80 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6d95866 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/15296/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| 

[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-03-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927694#comment-15927694
 ] 

Hadoop QA commented on YARN-2962:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 250 unchanged - 2 fixed = 255 total (was 252) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 34s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-2962 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859042/YARN-2962.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 56f62c645868 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6d95866 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.

[jira] [Updated] (YARN-4051) ContainerKillEvent lost when container is still recovering and application finishes

2017-03-16 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee updated YARN-4051:
---
Attachment: YARN-4051.08.patch-branch-2

> ContainerKillEvent lost when container is still recovering and application 
> finishes
> ---
>
> Key: YARN-4051
> URL: https://issues.apache.org/jira/browse/YARN-4051
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: sandflee
>Assignee: sandflee
>Priority: Critical
> Attachments: YARN-4051.01.patch, YARN-4051.02.patch, 
> YARN-4051.03.patch, YARN-4051.04.patch, YARN-4051.05.patch, 
> YARN-4051.06.patch, YARN-4051.07.patch, YARN-4051.08.patch, 
> YARN-4051.08.patch-branch-2
>
>
> As in YARN-4050, when the NM event dispatcher is blocked and a container is 
> still in the New state, finishing the application leaves that container 
> alive even after the NM event dispatcher is unblocked.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-03-16 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-2962:
---
Attachment: (was: YARN-2962.009.patch)

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.006.patch, YARN-2962.007.patch, 
> YARN-2962.008.patch, YARN-2962.008.patch, YARN-2962.009.patch, 
> YARN-2962.01.patch, YARN-2962.04.patch, YARN-2962.05.patch, 
> YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes even though 
> individually they were all small.
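
One common way to bound the number of children under any single parent znode
(shown here as a hypothetical sketch of the general idea, not the actual
ZKRMStateStore layout) is to split the application id into a parent/child
pair so that application nodes fan out across intermediate znodes:

{code}
/* Hypothetical sketch: split an application id into a two-level znode
 * path so no single parent accumulates every application node.
 * Not the actual ZKRMStateStore layout. */
#include <stdio.h>
#include <string.h>

/* Use the last `split_index` characters of the app id as the leaf and
 * the remaining prefix as the intermediate parent znode. */
static void app_znode_path(const char *root, const char *app_id,
                           int split_index, char *out, size_t out_len) {
  size_t len = strlen(app_id);
  size_t prefix_len = len > (size_t)split_index ? len - split_index : len;
  snprintf(out, out_len, "%s/%.*s/%s",
           root, (int)prefix_len, app_id, app_id + prefix_len);
}

int main(void) {
  char path[256];
  app_znode_path("/rmstore/ZKRMStateRoot/RMAppRoot",
                 "application_1489651234567_0007", 4, path, sizeof(path));
  /* -> .../application_1489651234567_/0007 : apps sharing the prefix land
     under the same intermediate node, capping children per parent. */
  printf("%s\n", path);
  return 0;
}
{code}

With a split index of 4, any single intermediate parent holds at most 10,000
leaves, which is the kind of bound this issue's title is asking for.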



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2017-03-16 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-2962:
---
Attachment: YARN-2962.009.patch

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.006.patch, YARN-2962.007.patch, 
> YARN-2962.008.patch, YARN-2962.008.patch, YARN-2962.009.patch, 
> YARN-2962.01.patch, YARN-2962.04.patch, YARN-2962.05.patch, 
> YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes even though 
> individually they were all small.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath

2017-03-16 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924166#comment-15924166
 ] 

Ayappan edited comment on YARN-6141 at 3/16/17 7:02 AM:


The predefined macros "__linux" and "linux" are not POSIX-compliant (neither 
are unix and __unix ), so ppc64le is in the right here.  They are allowed with 
GNU extensions, but should not be allowed for strict ANSI, which is implied by 
-std=c99.  The Hadoop code should be changed to use the compliant __linux__ .

See 
https://gcc.gnu.org/onlinedocs/cpp/System-specific-Predefined-Macros.html#System-specific-Predefined-Macros
 for discussion and guidance.  In particular:

"We are slowly phasing out all predefined macros which are outside the reserved 
namespace. You should never use them in new programs, and we encourage you to 
correct older code to use the parallel macros whenever you find it."  Here the 
"parallel macros" are those with underscores both before and after the name, 
such as __linux__ .

I do not know why the POWER back end is the only one that enforces this, but 
that point is moot.  Hadoop code should be changed.
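
A minimal sketch of the suggested change (illustrative only, not the actual
get_executable.c): dispatch on the reserved-namespace macro __linux__, which
GCC defines on Linux targets regardless of -std=c99, instead of the
GNU-extension spellings:

{code}
/*
 * Illustrative sketch only. Prefer the reserved-namespace macro
 * __linux__, which survives -std=c99, over the GNU-extension
 * spellings "linux" and "__linux".
 */
#include <stdio.h>

#if defined(__linux__)
static const char *platform(void) { return "linux"; }
#elif defined(__APPLE__)
static const char *platform(void) { return "darwin"; }
#else
#error Unsupported platform for this sketch.
#endif

int main(void) {
  printf("building the %s code path\n", platform());
  return 0;
}
{code}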


was (Author: ayappan):
The predefined macros "__linux" and "linux" are not POSIX-compliant (neither 
are "unix" and "__unix"), so ppc64le is in the right here. They are allowed 
with GNU extensions, but should not be allowed for strict ANSI, which is 
implied by -std=c99. The Hadoop code should be changed to use the compliant 
__linux__.

See 
https://gcc.gnu.org/onlinedocs/cpp/System-specific-Predefined-Macros.html#System-specific-Predefined-Macros
 for discussion and guidance.  In particular:

"We are slowly phasing out all predefined macros which are outside the reserved 
namespace. You should never use them in new programs, and we encourage you to 
correct older code to use the parallel macros whenever you find it."  Here the 
"parallel macros" are those with underscores both before and after the name, 
such as __linux__ .

I do not know why the POWER back end is the only one that enforces this, but 
that point is moot.  Hadoop code should be changed.

> ppc64le on Linux doesn't trigger __linux get_executable codepath
> 
>
> Key: YARN-6141
> URL: https://issues.apache.org/jira/browse/YARN-6141
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
> Environment: $ uname -a
> Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 
> 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Sonia Garudi
>  Labels: ppc64le
> Attachments: YARN-6141.diff
>
>
> On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' 
> project with the below error:
> Cannot safely determine executable path with a relative HADOOP_CONF_DIR on 
> this operating system.
> [WARNING]  #error Cannot safely determine executable path with a relative 
> HADOOP_CONF_DIR on this operating system.
> [WARNING]   ^
> [WARNING] make[2]: *** 
> [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o]
>  Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2
> [WARNING] make: *** [all] Error 2
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Cmake version used :
> $ /usr/bin/cmake --version
> cmake version 2.8.12.2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath

2017-03-16 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924166#comment-15924166
 ] 

Ayappan edited comment on YARN-6141 at 3/16/17 7:00 AM:


The predefined macros "__linux" and "linux" are not POSIX-compliant (neither 
are "unix" and "__unix"), so ppc64le is in the right here. They are allowed 
with GNU extensions, but should not be allowed for strict ANSI, which is 
implied by -std=c99. The Hadoop code should be changed to use the compliant 
__linux__.

See 
https://gcc.gnu.org/onlinedocs/cpp/System-specific-Predefined-Macros.html#System-specific-Predefined-Macros
 for discussion and guidance.  In particular:

"We are slowly phasing out all predefined macros which are outside the reserved 
namespace. You should never use them in new programs, and we encourage you to 
correct older code to use the parallel macros whenever you find it."  Here the 
"parallel macros" are those with underscores both before and after the name, 
such as __linux__ .

I do not know why the POWER back end is the only one that enforces this, but 
that point is moot.  Hadoop code should be changed.


was (Author: ayappan):
The predefined macros "__linux" and "linux" are not POSIX-compliant (neither 
are "unix" and "__unix"), so ppc64le is in the right here.  They are allowed 
with GNU extensions, but should not be allowed for strict ANSI, which is 
implied by -std=c99.  The Hadoop code should be changed to use the compliant 
__linux__ .

See 
https://gcc.gnu.org/onlinedocs/cpp/System-specific-Predefined-Macros.html#System-specific-Predefined-Macros
 for discussion and guidance.  In particular:

"We are slowly phasing out all predefined macros which are outside the reserved 
namespace. You should never use them in new programs, and we encourage you to 
correct older code to use the parallel macros whenever you find it."  Here the 
"parallel macros" are those with underscores both before and after the name, 
such as __linux__ .

I do not know why the POWER back end is the only one that enforces this, but 
that point is moot.  Hadoop code should be changed.

> ppc64le on Linux doesn't trigger __linux get_executable codepath
> 
>
> Key: YARN-6141
> URL: https://issues.apache.org/jira/browse/YARN-6141
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
> Environment: $ uname -a
> Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 
> 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Sonia Garudi
>  Labels: ppc64le
> Attachments: YARN-6141.diff
>
>
> On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' 
> project with the below error:
> Cannot safely determine executable path with a relative HADOOP_CONF_DIR on 
> this operating system.
> [WARNING]  #error Cannot safely determine executable path with a relative 
> HADOOP_CONF_DIR on this operating system.
> [WARNING]   ^
> [WARNING] make[2]: *** 
> [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o]
>  Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2
> [WARNING] make: *** [all] Error 2
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> Cmake version used :
> $ /usr/bin/cmake --version
> cmake version 2.8.12.2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org