[jira] [Commented] (YARN-4108) CapacityScheduler: Improve preemption to preempt only those containers that would satisfy the incoming request

2016-03-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188878#comment-15188878
 ] 

Jian He commented on YARN-4108:
---

For the new logic in PCPP, every round it loops over all previously calculated 
killable containers and checks whether each is finished; if not, it deducts the 
previously killable containers' resources and then calculates the new extra set 
of containers to kill. I think a simpler way is to re-generate the whole new 
set of containers (as it already does) every round, because in other scenarios 
previously marked killable containers may become invalid too, and then we would 
need to repeat similar logic. That is:
- PCPP does not need to track the killableContainers.
- Once PCPP has calculated the containers to kill, replace that set 
synchronously in the PreemptionManager.
- When looping over the queues, as done in the patch, call getKillableResource 
and getKillableContainers on the PreemptionManager accordingly.
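
A minimal, self-contained sketch of that flow. All class and method names here 
are illustrative only; the real PreemptionManager API in the patch may differ.

{code}
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch only; names do not match the actual YARN-4108 patch.
class PreemptionManagerSketch {
  // queue name -> the current set of killable container ids
  private final Map<String, Set<String>> killableByQueue =
      new ConcurrentHashMap<>();

  // Called by the policy each round: replace the whole set synchronously
  // instead of patching the previous round's bookkeeping.
  synchronized void replaceKillableContainers(String queue, Set<String> toKill) {
    killableByQueue.put(queue, toKill);
  }

  // Called while looping over the queues.
  Collection<String> getKillableContainers(String queue) {
    return killableByQueue.getOrDefault(queue, Collections.<String>emptySet());
  }
}
{code}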




> CapacityScheduler: Improve preemption to preempt only those containers that 
> would satisfy the incoming request
> --
>
> Key: YARN-4108
> URL: https://issues.apache.org/jira/browse/YARN-4108
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4108-design-doc-V3.pdf, 
> YARN-4108-design-doc-v1.pdf, YARN-4108-design-doc-v2.pdf, YARN-4108.1.patch, 
> YARN-4108.2.patch, YARN-4108.3.patch, YARN-4108.4.patch, YARN-4108.5.patch, 
> YARN-4108.6.patch, YARN-4108.poc.1.patch, YARN-4108.poc.2-WIP.patch, 
> YARN-4108.poc.3-WIP.patch, YARN-4108.poc.4-WIP.patch
>
>
> This is a sibling JIRA of YARN-2154. We should make sure container preemption 
> is more effective.
> *Requirements:*
> 1) Can handle case of user-limit preemption
> 2) Can handle case of resource placement requirements, such as: hard-locality 
> (I only want to use rack-1) / node-constraints (YARN-3409) / black-list (I 
> don't want to use rack1 and host\[1-3\])
> 3) Can handle preemption within a queue: cross-user preemption (YARN-2113), 
> cross-application preemption (such as priority-based (YARN-1963) / 
> fairness-based (YARN-3319)).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4666) API redirects from a standby resource manager drop query string parameters

2016-03-09 Thread KWON BYUNGCHANG (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated YARN-4666:
--
Attachment: YARN-4666.patch

I've attached a patch.
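
For reference, a minimal sketch of the kind of change this needs; it is 
illustrative only and not necessarily how the attached patch does it. The idea 
is to append request.getQueryString(), which getRequestURI() omits, when 
building the redirect target.

{code}
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch; the attached patch may differ.
class RedirectSketch {
  static void redirectToActiveRM(HttpServletRequest req,
      HttpServletResponse resp, String activeRMBase) {
    String query = req.getQueryString();          // null when no query string
    String target = activeRMBase + req.getRequestURI()
        + (query == null ? "" : "?" + query);
    resp.setHeader("Location", target);
    resp.setStatus(307);                          // TEMPORARY_REDIRECT
  }
}
{code}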


> API redirects from a standby resource manager drop query string parameters
> --
>
> Key: YARN-4666
> URL: https://issues.apache.org/jira/browse/YARN-4666
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.1
>Reporter: Antonio Bellezza
> Attachments: YARN-4666.patch
>
>
> When making an API request with a query string to a secondary resource 
> manager, the redirection doesn't contain the query string.
> Example:
> {code}
> $ curl -v -H "Accept: application/json" 
> "http://standby-server.mydomain:8088/ws/v1/cluster/apps?limit=10&user=someuser";
> * Hostname was NOT found in DNS cache
> *   Trying 192.168.0.123...
> * Connected to standby-server.mydomain (192.168.0.123) port 8088 (#0)
> > GET /ws/v1/cluster/apps?limit=10&user=someuser HTTP/1.1
> > User-Agent: curl/7.35.0
> > Host: standby-server.mydomain:8088
> > Accept: application/json
> > 
> < HTTP/1.1 307 TEMPORARY_REDIRECT
> < Cache-Control: no-cache
> < Expires: Fri, 22 Jan 2016 16:43:42 GMT
> < Date: Fri, 22 Jan 2016 16:43:42 GMT
> < Pragma: no-cache
> < Expires: Fri, 22 Jan 2016 16:43:42 GMT
> < Date: Fri, 22 Jan 2016 16:43:42 GMT
> < Pragma: no-cache
> < Content-Type: text/plain; charset=UTF-8
> < Location: http://active-server.mydomain:8088/ws/v1/cluster/apps
> < Content-Length: 105
> * Server Jetty(6.1.26.hwx) is not blacklisted
> < Server: Jetty(6.1.26.hwx)
> < 
> This is standby RM. The redirect url is: 
> http://active-server.mydomain:8088/ws/v1/cluster/apps
> * Connection #0 to host standby-server.mydomain left intact
> {code}
> This may depend on RMWebAppFilter generating the redirect path from 
> request.getRequestURI(), which does not include query string parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-1100) Giving multiple commands to ContainerLaunchContext doesn't work as expected

2016-03-09 Thread Xiaoshuang LU (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoshuang LU updated YARN-1100:

Attachment: YARN-1100.patch

> Giving multiple commands to ContainerLaunchContext doesn't work as expected
> ---
>
> Key: YARN-1100
> URL: https://issues.apache.org/jira/browse/YARN-1100
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, nodemanager
>Affects Versions: 2.1.0-beta, 2.7.2
>Reporter: Sandy Ryza
> Attachments: YARN-1100.patch
>
>
> A ContainerLaunchContext accepts a list of commands (as strings) to be 
> executed to launch the container.  I would expect that giving a list with the 
> two commands "echo yolo" and "date" would print something like
> {code}
> yolo
> Mon Aug 26 14:40:23 PDT 2013
> {code}
> Instead it prints
> {code}
> yolo date
> {code}
> This is because the commands get executed with:
> {code}
> exec /bin/bash -c "echo yolo date"
> {code}
> To get the expected behavior I have to include semicolons at the end of each 
> command. At the very least, this should be documented, but I think better 
> would be for the NM to insert the semicolons.
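
For illustration, a self-contained sketch of the requested behavior; this is 
not the NodeManager's actual launch-script code. Joining the commands with a 
separator before handing them to bash -c runs each command in turn.

{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch; not the NodeManager's launch code.
public class LaunchSketch {
  public static void main(String[] args)
      throws IOException, InterruptedException {
    List<String> commands = Arrays.asList("echo yolo", "date");
    // Without a separator, bash -c sees the single command "echo yolo date".
    String joined = String.join("; ", commands);   // "echo yolo; date"
    new ProcessBuilder("/bin/bash", "-c", joined)
        .inheritIO().start().waitFor();
  }
}
{code}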



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4783) Log aggregation failure for application when Nodemanager is restarted

2016-03-09 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188714#comment-15188714
 ] 

Surendra Singh Lilhore commented on YARN-4783:
--

Thanks [~jlowe] for the comment. 

Details of the events from the logs:
===

bq. 1. When the application completed
{noformat}
2014-03-01 19:11:03,527 INFO 
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=yarn 
OPERATION=Application Finished - Succeeded  TARGET=RMAppManager 
RESULT=SUCCESS  APPID=application_xx_y
{noformat}

bq. 2. When the HDFS token was cancelled by the RM (check for 'Cancelling 
HDFS_DELEGATION_TOKEN token 9 for yarn')
{noformat}
2014-03-01 19:21:12,347 INFO 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
 Token cancelation requested for identifier: HDFS_DELEGATION_TOKEN token 9 for 
yarn
{noformat}


bq. 3. When the nodemanager reconnected to the RM (and presumably started log 
aggregation shortly afterwards)

NM Lost
{noformat}
2014-03-01 19:13:42,079 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating 
Node host87:6 as it is now LOST
{noformat}

NM Added
{noformat}
2014-03-02 08:44:31,785 INFO 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: host87:6 
Node Transitioned from NEW to RUNNING
{noformat}

{noformat}
2014-03-02 08:44:29,705 ERROR 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService:
 Failed to setup application log directory for application_xx_y
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
 token (HDFS_DELEGATION_TOKEN token 9 for yarn) can't be found in cache
at org.apache.hadoop.ipc.Client.call(Client.java:1511)
at org.apache.hadoop.ipc.Client.call(Client.java:1447)
{noformat}

> Log aggregation failure for application when Nodemanager is restarted 
> --
>
> Key: YARN-4783
> URL: https://issues.apache.org/jira/browse/YARN-4783
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>
> Scenario:
> =
> 1. Start NM with user dsperf:hadoop
> 2. Configure the linux-execute user as dsperf
> 3. Submit an application as the yarn user
> 4. Once a few containers are allocated to NM 1
> 5. Nodemanager 1 is stopped (wait for expiry)
> 6. Start the node manager after the application is completed
> 7. Check whether log aggregation happens for the container logs in the 
> NMLocal directory
> Expected Output:
> ===
> Log aggregation should be successful
> Actual Output:
> ===
> Log aggregation is not successful



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3998) Add retry-times to let NM re-launch container when it fails to run

2016-03-09 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188681#comment-15188681
 ] 

Jun Gong commented on YARN-3998:


Yes, it seems we need to deal with platform errors. However, it still seems 
useful for the user to specify a retry policy that retries on particular error 
codes, because only the user knows the meaning of each error code.
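
A sketch of what such a user-specified policy could look like; the class and 
field names are hypothetical and not the API proposed in the patch.

{code}
import java.util.Set;

// Hypothetical sketch of a per-container retry policy.
class ContainerRetryPolicySketch {
  private final int maxRetries;                 // the proposed "retry-times"
  private final Set<Integer> retryOnExitCodes;  // user-specified error codes

  ContainerRetryPolicySketch(int maxRetries, Set<Integer> retryOnExitCodes) {
    this.maxRetries = maxRetries;
    this.retryOnExitCodes = retryOnExitCodes;
  }

  // Retry only while attempts remain and the exit code is one the user, who
  // knows its meaning, asked to retry on (an empty set means retry on any).
  boolean shouldRetry(int exitCode, int attemptsSoFar) {
    return attemptsSoFar < maxRetries
        && (retryOnExitCodes.isEmpty() || retryOnExitCodes.contains(exitCode));
  }
}
{code}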

> Add retry-times to let NM re-launch container when it fails to run
> --
>
> Key: YARN-3998
> URL: https://issues.apache.org/jira/browse/YARN-3998
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-3998.01.patch, YARN-3998.02.patch, 
> YARN-3998.03.patch, YARN-3998.04.patch, YARN-3998.05.patch, YARN-3998.06.patch
>
>
> I'd like to add a field (retry-times) in ContainerLaunchContext. When the AM 
> launches containers, it could specify the value. Then the NM will re-launch 
> the container 'retry-times' times when it fails to run (e.g. the exit code 
> is not 0). 
> It will save a lot of time. It avoids container localization. The RM does not 
> need to re-schedule the container. And local files in the container's working 
> directory will be left for re-use. (If the container has downloaded some big 
> files, it does not need to re-download them when running again.) 
> We find this useful in systems like Storm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188585#comment-15188585
 ] 

Hadoop QA commented on YARN-4784:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 1 new + 24 unchanged - 1 fixed = 25 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 24s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 44s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 150m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.re

[jira] [Commented] (YARN-4545) Allow YARN distributed shell to use ATS v1.5 APIs

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188490#comment-15188490
 ] 

Hadoop QA commented on YARN-4545:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 27s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 8s 
{color} | {color:red} root: patch generated 4 new + 47 unchanged - 0 fixed = 51 
total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 1s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |
|

[jira] [Commented] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188471#comment-15188471
 ] 

Hadoop QA commented on YARN-4784:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 1 new + 24 unchanged - 1 fixed = 25 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 38s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 55s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 169m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Do

[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188396#comment-15188396
 ] 

Hadoop QA commented on YARN-4457:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 2s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 34s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792376/YARN-4457.003.patch |
| JIRA Issue | YARN-4457 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 16b3b212e541 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e040d3 |
| Default Java | 1.7.0_95 |
| Multi-JDK versio

[jira] [Commented] (YARN-4713) Warning by unchecked conversion in TestTimelineWebServices

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188394#comment-15188394
 ] 

Hadoop QA commented on YARN-4713:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 22s {color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed 
with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 21s {color} 
| {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed 
with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792385/YARN-4713.2.patch |
| JIRA Issue | YARN-4713 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 40f70485fece 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:

[jira] [Commented] (YARN-4696) EntityGroupFSTimelineStore to work in the absence of an RM

2016-03-09 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188392#comment-15188392
 ] 

Li Lu commented on YARN-4696:
-

Last patch LGTM. I believe a volatile boolean would also do the work of the 
newly added {{stopExecutors}} (since we only read and write it separately and 
never use CAS), but this is extremely minor so I'm fine with either one. I'll 
wait for about half a day and commit if no further concerns are raised.
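
A small sketch of that point (illustrative, not an excerpt of the patch): a 
flag that is only ever read and written, never compare-and-set, can be a plain 
volatile boolean.

{code}
// Illustrative sketch; not an excerpt of the YARN-4696 patch.
class ExecutorStopSketch {
  private volatile boolean stopExecutors = false;

  void shutdown() {
    stopExecutors = true;        // plain write, visible to all threads
  }

  void workerLoop() {
    while (!stopExecutors) {     // plain read; no CAS needed
      // ... do one unit of work ...
    }
  }
}
{code}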

> EntityGroupFSTimelineStore to work in the absence of an RM
> --
>
> Key: YARN-4696
> URL: https://issues.apache.org/jira/browse/YARN-4696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-4696-001.patch, YARN-4696-002.patch, 
> YARN-4696-003.patch, YARN-4696-005.patch, YARN-4696-006.patch, 
> YARN-4696-007.patch, YARN-4696-008.patch, YARN-4696-009.patch, 
> YARN-4696-010.patch, YARN-4696-012.patch, YARN-4696-013.patch, 
> YARN-4696-014.patch
>
>
> {{EntityGroupFSTimelineStore}} now depends on an RM being up and running, 
> with the configuration pointing to it. This is a new change, and impacts 
> testing where you have historically been able to test without an RM running.
> The sole purpose of the probe is to automatically determine if an app is 
> running; it falls back to "unknown" if not. If the RM connection were 
> optional, the "unknown" codepath could be called directly, relying on the 
> age of the file as a metric of completion.
> Options
> # add a flag to disable RM connect
> # skip automatically if RM not defined/set to 0.0.0.0
> # disable retries on yarn client IPC; if it fails, tag app as unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3863) Support complex filters in TimelineReader

2016-03-09 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188389#comment-15188389
 ] 

Sangjin Lee commented on YARN-3863:
---

The latest patch looks good. Just one remaining question that I'd like a quick 
answer on. It seems that the null checking of 
{{TimelineEntityReader.getFilters()}} is not entirely consistent. There are 23 
invocations throughout the code (including this patch), and none of them checks 
for null except for these 3:

{noformat}
ApplicationEntityReader.constructFilterListBasedOnFilters()
ApplicationEntityReader.parseEntity()
GenericEntityReader.parseEntity()
{noformat}

I suspect null should not happen. In that case, would it make sense to make it 
clear (in javadoc or comments) that filters can never be null and skip the null 
check? Otherwise, we should check for null everywhere. Thoughts?
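
If the non-null route is taken, a minimal sketch of making that contract 
explicit (illustrative only; Object stands in for the real filters type):

{code}
import java.util.Objects;

// Illustrative sketch; names merely mirror the discussion.
class TimelineEntityReaderSketch {
  private Object filters;   // stands in for the real filters type

  /** Never returns null; callers need not null-check the result. */
  Object getFilters() {
    return Objects.requireNonNull(filters, "filters must be set before use");
  }
}
{code}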

> Support complex filters in TimelineReader
> -
>
> Key: YARN-3863
> URL: https://issues.apache.org/jira/browse/YARN-3863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3863-YARN-2928.v2.01.patch, 
> YARN-3863-YARN-2928.v2.02.patch, YARN-3863-YARN-2928.v2.03.patch, 
> YARN-3863-YARN-2928.v2.04.patch, YARN-3863-feature-YARN-2928.wip.003.patch, 
> YARN-3863-feature-YARN-2928.wip.01.patch, 
> YARN-3863-feature-YARN-2928.wip.02.patch, 
> YARN-3863-feature-YARN-2928.wip.04.patch, 
> YARN-3863-feature-YARN-2928.wip.05.patch
>
>
> Currently, filters in the timeline reader will return an entity only if all 
> the filter conditions hold true, i.e. only the AND operation is supported. We 
> can support the OR operation for the filters as well. Additionally, as the 
> primary backend implementation is HBase, we can design our filters in a 
> manner where they closely resemble HBase Filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4784:
---
Attachment: YARN-4784.001.patch

> fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value
> --
>
> Key: YARN-4784
> URL: https://issues.apache.org/jira/browse/YARN-4784
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, yarn
>Affects Versions: 2.7.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0
>
> Attachments: YARN-4784.001.patch
>
>
> The configuration item defaultQueueSchedulingPolicy should not accept fifo as 
> a value since it is an invalid value for non-leaf queues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4784:
---
Attachment: (was: YARN-4784.001.patch)

> fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value
> --
>
> Key: YARN-4784
> URL: https://issues.apache.org/jira/browse/YARN-4784
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, yarn
>Affects Versions: 2.7.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0
>
>
> The configuration item defaultQueueSchedulingPolicy should not accept fifo as 
> a value since it is an invalid value for non-leaf queues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4713) Warning by unchecked conversion in TestTimelineWebServices

2016-03-09 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated YARN-4713:
---
Attachment: YARN-4713.2.patch

> Warning by unchecked conversion in TestTimelineWebServices 
> ---
>
> Key: YARN-4713
> URL: https://issues.apache.org/jira/browse/YARN-4713
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Tsuyoshi Ozawa
>Assignee: Gabor Liptak
>  Labels: newbie
> Attachments: YARN-4713.1.patch, YARN-4713.2.patch
>
>
> [WARNING] 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestTimelineWebServices.java:[123,38]
>  [unchecked] unchecked conversion
> {code}
>   Enumeration<String> names = mock(Enumeration.class);
> {code}
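
A minimal sketch of the usual fix for this category of warning (not 
necessarily what the attached patch does): suppress it at the narrowest 
possible scope, since Mockito's mock(Enumeration.class) can only hand back the 
raw type.

{code}
import static org.mockito.Mockito.mock;

import java.util.Enumeration;

// Illustrative sketch; the attached patch may differ.
class WarningFixSketch {
  @SuppressWarnings("unchecked")
  Enumeration<String> mockHeaderNames() {
    return mock(Enumeration.class);   // raw-typed mock; conversion suppressed
  }
}
{code}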



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4545) Allow YARN distributed shell to use ATS v1.5 APIs

2016-03-09 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-4545:

Attachment: YARN-4545-trunk.007.patch

Thanks [~ste...@apache.org]! I uploaded a new patch to refactor the preparation 
process. Right now it takes a FileSystem object. Could you please double check 
whether this works fine with your code? Thanks!

> Allow YARN distributed shell to use ATS v1.5 APIs
> -
>
> Key: YARN-4545
> URL: https://issues.apache.org/jira/browse/YARN-4545
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-4545-YARN-4265.001.patch, 
> YARN-4545-trunk.001.patch, YARN-4545-trunk.002.patch, 
> YARN-4545-trunk.003.patch, YARN-4545-trunk.004.patch, 
> YARN-4545-trunk.005.patch, YARN-4545-trunk.006.patch, 
> YARN-4545-trunk.007.patch
>
>
> We can use YARN distributed shell as a demo for the ATS v1.5 APIs. We need to 
> allow distributed shell to post data with the ATS v1.5 API if 1.5 is enabled 
> in the system. We also need to provide a sample plugin to read that data out. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4457) Cleanup unchecked types for EventHandler

2016-03-09 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-4457:
---
Attachment: YARN-4457.003.patch

Rebased patch.

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch
>
>
> The EventHandler class is often used in an untyped context resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.
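
A sketch of the typing change described above; the method and type names 
follow the description, not necessarily the committed patch.

{code}
// Illustrative sketch of the typing change; not a verbatim excerpt.
interface Event {}

interface EventHandler<T extends Event> {
  void handle(T event);
}

interface Dispatcher {
  // Was the raw type EventHandler; the parameterized return type clears the
  // unchecked warnings at call sites without breaking raw-typed callers.
  EventHandler<Event> getHandler();
}
{code}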



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4786) Enhance hbase coprocessor aggregation operations: GLOBAL_MIN, LATEST_MIN etc. and FINAL attributes

2016-03-09 Thread Vrushali C (JIRA)
Vrushali C created YARN-4786:


 Summary: Enhance hbase coprocessor aggregation 
operations: GLOBAL_MIN, LATEST_MIN etc. and FINAL attributes
 Key: YARN-4786
 URL: https://issues.apache.org/jira/browse/YARN-4786
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C



As part of YARN-4062, Joep and I have been discussing min/max operations and 
the final attributes. 

YARN-4062 has GLOBAL_MIN, GLOBAL_MAX and SUM operations. It presently indicates 
SUM_FINAL for a cell that contains a metric that is the final value for that 
metric.

We should enhance this so that the aggregation dimensions SUM, MIN, MAX, etc. 
are really set at a per-column level; they shouldn't be passed from the client, 
but should instead be set by the ColumnHelper infrastructure. We should 
probably use a different tag value for that.

Both the aggregation dimension and this "FINAL_VALUE" (or whatever abbreviation 
we use) are needed to determine the right thing to do for compaction. Only one 
value needs to have this final-value bit / tag set.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4785) inconsistent value type of the "type" field for LeafQueueInfo in response of RM REST API - cluster/scheduler

2016-03-09 Thread Jayesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayesh updated YARN-4785:
-
Description: 
I see an inconsistent value type (String vs. Array) for the "type" field of 
LeafQueueInfo in the response of the RM REST API - cluster/scheduler.
As per the spec it should always be a String.

Here is the sample output (non-relevant fields removed):
{code}
{
  "scheduler": {
"schedulerInfo": {
  "type": "capacityScheduler",
  "capacity": 100,
  ...
  "queueName": "root",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 0.1,

  },
  {
"type": [
  "capacitySchedulerLeafQueueInfo"
],
"capacity": 0.1,
"queueName": "test-queue",
"state": "RUNNING",

  },
  {
"type": [
  "capacitySchedulerLeafQueueInfo"
],
"capacity": 2.5,

  },
  {
"capacity": 25,

"state": "RUNNING",
"queues": {
  "queue": [
{
  "capacity": 6,
  "state": "RUNNING",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 100,
...
  }
]
  },
  
},
{
  "capacity": 6,
  ...
  "state": "RUNNING",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 100,
...
  }
]
  },
  ...
},
...
  ]
},
...
  }
]
  }
}
  }
}
{code}

  was:
Inconsistent "type" field value type in the JSON response of cluster/scheduler.
Here is the sample output (non-relevant fields removed):
{code}
{
  "scheduler": {
"schedulerInfo": {
  "type": "capacityScheduler",
  "capacity": 100,
  ...
  "queueName": "root",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 0.1,

  },
  {
"type": [
  "capacitySchedulerLeafQueueInfo"
],
"capacity": 0.1,
"queueName": "test-queue",
"state": "RUNNING",

  },
  {
"type": [
  "capacitySchedulerLeafQueueInfo"
],
"capacity": 2.5,

  },
  {
"capacity": 25,

"state": "RUNNING",
"queues": {
  "queue": [
{
  "capacity": 6,
  "state": "RUNNING",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 100,
...
  }
]
  },
  
},
{
  "capacity": 6,
  ...
  "state": "RUNNING",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 100,
...
  }
]
  },
  ...
},
...
  ]
},
...
  }
]
  }
}
  }
}
{code}


> inconsistent value type of the "type" field for LeafQueueInfo in response of 
> RM REST API - cluster/scheduler
> 
>
> Key: YARN-4785
> URL: https://issues.apache.org/jira/browse/YARN-4785
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.6.0
>Reporter: Jayesh
>  Labels: REST_API
>
> I see an inconsistent value type (String vs. Array) for the "type" field of 
> LeafQueueInfo in the response of the RM REST API - cluster/scheduler.
> As per the spec it should always be a String.
> Here is the sample output (non-relevant fields removed):
> {code}
> {
>   "scheduler": {
> "schedulerInfo": {
>

[jira] [Updated] (YARN-4785) inconsistent value type of the "type" field for LeafQueueInfo in response of RM REST API - cluster/scheduler

2016-03-09 Thread Jayesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayesh updated YARN-4785:
-
Summary: inconsistent value type of the "type" field for LeafQueueInfo in 
response of RM REST API - cluster/scheduler  (was: type field inconsistencies 
in response of RM REST API - cluster/scheduler)

> inconsistent value type of the "type" field for LeafQueueInfo in response of 
> RM REST API - cluster/scheduler
> 
>
> Key: YARN-4785
> URL: https://issues.apache.org/jira/browse/YARN-4785
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.6.0
>Reporter: Jayesh
>  Labels: REST_API
>
> Inconsistent "type" field value type in the JSON response of 
> cluster/scheduler.
> Here is the sample output (non-relevant fields removed):
> {code}
> {
>   "scheduler": {
> "schedulerInfo": {
>   "type": "capacityScheduler",
>   "capacity": 100,
>   ...
>   "queueName": "root",
>   "queues": {
> "queue": [
>   {
> "type": "capacitySchedulerLeafQueueInfo",
> "capacity": 0.1,
> 
>   },
>   {
> "type": [
>   "capacitySchedulerLeafQueueInfo"
> ],
> "capacity": 0.1,
> "queueName": "test-queue",
> "state": "RUNNING",
> 
>   },
>   {
> "type": [
>   "capacitySchedulerLeafQueueInfo"
> ],
> "capacity": 2.5,
> 
>   },
>   {
> "capacity": 25,
> 
> "state": "RUNNING",
> "queues": {
>   "queue": [
> {
>   "capacity": 6,
>   "state": "RUNNING",
>   "queues": {
> "queue": [
>   {
> "type": "capacitySchedulerLeafQueueInfo",
> "capacity": 100,
> ...
>   }
> ]
>   },
>   
> },
> {
>   "capacity": 6,
>   ...
>   "state": "RUNNING",
>   "queues": {
> "queue": [
>   {
> "type": "capacitySchedulerLeafQueueInfo",
> "capacity": 100,
> ...
>   }
> ]
>   },
>   ...
> },
> ...
>   ]
> },
> ...
>   }
> ]
>   }
> }
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4785) type field inconsistencies in response of RM REST API - cluster/scheduler

2016-03-09 Thread Jayesh (JIRA)
Jayesh created YARN-4785:


 Summary: type field inconsistencies in response of RM REST API - 
cluster/scheduler
 Key: YARN-4785
 URL: https://issues.apache.org/jira/browse/YARN-4785
 Project: Hadoop YARN
  Issue Type: Bug
  Components: webapp
Affects Versions: 2.6.0
Reporter: Jayesh


Inconsistent "type" field value type in the JSON response of cluster/scheduler.
Here is the sample output (non-relevant fields removed):
{code}
{
  "scheduler": {
"schedulerInfo": {
  "type": "capacityScheduler",
  "capacity": 100,
  ...
  "queueName": "root",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 0.1,

  },
  {
"type": [
  "capacitySchedulerLeafQueueInfo"
],
"capacity": 0.1,
"queueName": "test-queue",
"state": "RUNNING",

  },
  {
"type": [
  "capacitySchedulerLeafQueueInfo"
],
"capacity": 2.5,

  },
  {
"capacity": 25,

"state": "RUNNING",
"queues": {
  "queue": [
{
  "capacity": 6,
  "state": "RUNNING",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 100,
...
  }
]
  },
  
},
{
  "capacity": 6,
  ...
  "state": "RUNNING",
  "queues": {
"queue": [
  {
"type": "capacitySchedulerLeafQueueInfo",
"capacity": 100,
...
  }
]
  },
  ...
},
...
  ]
},
...
  }
]
  }
}
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-4062) Add the flush and compaction functionality via coprocessors and scanners for flow run table

2016-03-09 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee reassigned YARN-4062:
-

Assignee: Sangjin Lee  (was: Vrushali C)

> Add the flush and compaction functionality via coprocessors and scanners for 
> flow run table
> ---
>
> Key: YARN-4062
> URL: https://issues.apache.org/jira/browse/YARN-4062
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Sangjin Lee
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4062-YARN-2928.04.patch, 
> YARN-4062-YARN-2928.05.patch, YARN-4062-YARN-2928.06.patch, 
> YARN-4062-YARN-2928.07.patch, YARN-4062-YARN-2928.1.patch, 
> YARN-4062-feature-YARN-2928.01.patch, YARN-4062-feature-YARN-2928.02.patch, 
> YARN-4062-feature-YARN-2928.03.patch
>
>
> As part of YARN-3901, a coprocessor and scanner are being added for storing 
> into the flow_run table. It also needs flush & compaction processing in the 
> coprocessor, and perhaps a new scanner to deal with the data during the 
> flushing and compaction stages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4767) Network issues can cause persistent RM UI outage

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188159#comment-15188159
 ] 

Hadoop QA commented on YARN-4767:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: patch 
generated 3 new + 63 unchanged - 3 fixed = 66 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
40s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 11s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common-jdk1.8.0_74
 with JDK v1.8.0_74 generated 2 new + 98 unchanged - 2 fixed = 100 total (was 
100) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 18s {color} 
| {color:red} hadoop-yarn-server-web-proxy in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 7s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the

[jira] [Updated] (YARN-4062) Add the flush and compaction functionality via coprocessors and scanners for flow run table

2016-03-09 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-4062:
--
Assignee: Vrushali C  (was: Sangjin Lee)

> Add the flush and compaction functionality via coprocessors and scanners for 
> flow run table
> ---
>
> Key: YARN-4062
> URL: https://issues.apache.org/jira/browse/YARN-4062
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-4062-YARN-2928.04.patch, 
> YARN-4062-YARN-2928.05.patch, YARN-4062-YARN-2928.06.patch, 
> YARN-4062-YARN-2928.07.patch, YARN-4062-YARN-2928.1.patch, 
> YARN-4062-feature-YARN-2928.01.patch, YARN-4062-feature-YARN-2928.02.patch, 
> YARN-4062-feature-YARN-2928.03.patch
>
>
> As part of YARN-3901, a coprocessor and a scanner are being added for storing 
> data into the flow_run table. It also needs flush & compaction processing in 
> the coprocessor, and perhaps a new scanner to deal with the data during the 
> flushing and compaction stages. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4784:
---
Fix Version/s: 2.9.0

> fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value
> --
>
> Key: YARN-4784
> URL: https://issues.apache.org/jira/browse/YARN-4784
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, yarn
>Affects Versions: 2.7.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0
>
> Attachments: YARN-4784.001.patch
>
>
> The configuration item defaultQueueSchedulingPolicy should not accept fifo as 
> a value since it is an invalid value for non-leaf queues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4784:
---
Attachment: YARN-4784.001.patch

> fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value
> --
>
> Key: YARN-4784
> URL: https://issues.apache.org/jira/browse/YARN-4784
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, yarn
>Affects Versions: 2.7.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Fix For: 2.9.0
>
> Attachments: YARN-4784.001.patch
>
>
> The configuration item defaultQueueSchedulingPolicy should not accept fifo as 
> a value since it is an invalid value for non-leaf queues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4696) EntityGroupFSTimelineStore to work in the absence of an RM

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188127#comment-15188127
 ] 

Hadoop QA commented on YARN-4696:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 4 new + 
242 unchanged - 0 fixed = 246 total (was 242) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 58s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 59s 
{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {col

[jira] [Commented] (YARN-4150) Failure in TestNMClient because nodereports were not available

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188107#comment-15188107
 ] 

Hadoop QA commented on YARN-4150:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 30s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 43s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.8.0_74 Timed out junit tests | 
org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
| JDK v1.7.0_95 Failed junit tests | hadoop.yarn.client.TestGetGroups |
| JDK v1.7.0_95 Timed out junit tests | 
org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache

[jira] [Created] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-4784:
--

 Summary: fair scheduler: defaultQueueSchedulingPolicy should not 
accept fifo as a value
 Key: YARN-4784
 URL: https://issues.apache.org/jira/browse/YARN-4784
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Reporter: Yufei Gu
Assignee: Yufei Gu






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4784:
---
Description: The configuration item defaultQueueSchedulingPolicy should not 
accept fifo as a value since it is an invalid value for non-leaf queues.

> fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value
> --
>
> Key: YARN-4784
> URL: https://issues.apache.org/jira/browse/YARN-4784
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, yarn
>Affects Versions: 2.7.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> The configuration item defaultQueueSchedulingPolicy should not accept fifo as 
> a value since it is an invalid value for non-leaf queues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4784:
---
Affects Version/s: 2.7.0

> fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value
> --
>
> Key: YARN-4784
> URL: https://issues.apache.org/jira/browse/YARN-4784
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, yarn
>Affects Versions: 2.7.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-03-09 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4784:
---
Component/s: fairscheduler

> fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value
> --
>
> Key: YARN-4784
> URL: https://issues.apache.org/jira/browse/YARN-4784
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, yarn
>Affects Versions: 2.7.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4484) Available Resource calculation for a queue is not correct when used with labels

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15188032#comment-15188032
 ] 

Hadoop QA commented on YARN-4484:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 0 new + 11 unchanged - 2 fixed = 11 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 48s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 20s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 159m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792306/0003-

[jira] [Commented] (YARN-1547) Prevent DoS of ApplicationMasterProtocol by putting in limits

2016-03-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187942#comment-15187942
 ] 

Vinod Kumar Vavilapalli commented on YARN-1547:
---

Looked at the attached PDF. A few comments:

As I mentioned before, we should organize the code such that we can reuse your 
proposed _DoSInterceptor_ (and maybe even the _AbstractRequestInterceptor_ 
framework) to enforce limits on the server side.

bq. Its request contains information about host, port, and tracking URL.  We 
need to validate them.  One possible solution can be adding a validate inside 
RMWebApp, by using Jetty DoSFilter. 
I don't quite understand this. We should make sure that things like the 
tracking URL are limited in length at the time of accepting them into the RM - 
that'd mean checks-and-validations in the ApplicationMasterService component 
(the first component in the RM interacting with the AM) itself.

bq. To do this, we can implement a sliding window system that counts the number 
of request per minute and determinates if in a time slot we AM exceeds the 
number or requests.
For most of our limits, sliding windows won't work. What we need are static 
limits per application, per ResourceRequest, etc.

A general ask: instead of a textual representation of the proposal, I think it 
is much better to reason about and discuss this in a tabular format, e.g.:
||Object||Object Type||Type of Limit to be put||Behavior on limit-overflow||
|RegisterApplicationMasterRequest.trackingURL|String|Size of String|String 
truncated|
|AllocateRequest.ResourceBlacklistRequest.BlacklistAdditions|List of 
Strings|Size of the list, Size of each string|Rejected if list-size overflow, 
truncated strings|
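
To make the intended enforcement concrete, here is a minimal sketch of the two 
behaviors in the table - truncating overlong strings and rejecting oversized 
lists. The class name and limit values are assumptions for illustration, not 
taken from the attached PDF:

{code}
import java.util.List;

// Sketch only: names and limit values are assumed for illustration.
public final class RequestLimits {
  private static final int MAX_TRACKING_URL_LENGTH = 2048; // assumed limit
  private static final int MAX_BLACKLIST_ADDITIONS = 1000;  // assumed limit

  private RequestLimits() {
  }

  // "String truncated" behavior: keep the request, shorten the field.
  public static String truncate(String value, int maxLength) {
    if (value == null || value.length() <= maxLength) {
      return value;
    }
    return value.substring(0, maxLength);
  }

  // "Rejected if list-size overflow" behavior: fail the request outright.
  public static <T> void checkListSize(List<T> list, int maxSize, String what) {
    if (list != null && list.size() > maxSize) {
      throw new IllegalArgumentException(what + " has " + list.size()
          + " entries; the limit is " + maxSize);
    }
  }
}
{code}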

> Prevent DoS of ApplicationMasterProtocol by putting in limits
> -
>
> Key: YARN-1547
> URL: https://issues.apache.org/jira/browse/YARN-1547
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-1547.pdf
>
>
> Points of DoS in ApplicationMasterProtocol
>  - Host and trackingURL in RegisterApplicationMasterRequest
>  - Diagnostics, final trackingURL in FinishApplicationMasterRequest
>  - Unlimited number of resourceAsks, containersToBeReleased and 
> resourceBlacklistRequest in AllocateRequest
> -- Unbounded number of priorities and/or resourceRequests in each ask.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3998) Add retry-times to let NM re-launch container when it fails to run

2016-03-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187920#comment-15187920
 ] 

Vinod Kumar Vavilapalli commented on YARN-3998:
---

Instead of making it an arbitrary case-by-case per-exit-code behavior, we 
should have a general policy that is easily understood by users.

Namely, we should not automatically restart containers for platform errors such 
as misconfiguration of YARN, wrong permissions on YARN directories, absent user 
accounts, etc.

This is similar, again, to how we handled AM failures at a higher level via 
YARN-614. 
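
As a strawman, here is a minimal sketch of such a policy - relaunch only for 
failures a retry could plausibly fix. The exit codes and names are 
hypothetical, not YARN's:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch only: PLATFORM_ERROR_CODES is an assumed set, not real YARN codes.
final class ContainerRetryPolicy {
  // Failures a relaunch cannot fix: misconfiguration, bad permissions,
  // missing user accounts, and similar platform errors.
  private static final Set<Integer> PLATFORM_ERROR_CODES =
      new HashSet<>(Arrays.asList(2, 10, 20));

  private final int maxRetries;
  private int attempts;

  ContainerRetryPolicy(int maxRetries) {
    this.maxRetries = maxRetries;
  }

  // Relaunch only for non-zero, non-platform exit codes, up to maxRetries.
  boolean shouldRelaunch(int exitCode) {
    if (exitCode == 0 || PLATFORM_ERROR_CODES.contains(exitCode)) {
      return false;
    }
    return attempts++ < maxRetries;
  }
}
{code}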

> Add retry-times to let NM re-launch container when it fails to run
> --
>
> Key: YARN-3998
> URL: https://issues.apache.org/jira/browse/YARN-3998
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-3998.01.patch, YARN-3998.02.patch, 
> YARN-3998.03.patch, YARN-3998.04.patch, YARN-3998.05.patch, YARN-3998.06.patch
>
>
> I'd like to add a field (retry-times) in ContainerLaunchContext. When the AM 
> launches containers, it could specify the value. The NM will then re-launch 
> the container 'retry-times' times when it fails to run (e.g. the exit code is 
> not 0). It will save a lot of time: it avoids container localization, the RM 
> does not need to re-schedule the container, and local files in the 
> container's working directory are left for re-use. (If a container has 
> downloaded some big files, it does not need to re-download them when running 
> again.)
> We find this useful in systems like Storm.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4779) Fix AM container allocation logic in SLS

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187912#comment-15187912
 ] 

Hadoop QA commented on YARN-4779:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-tools/hadoop-sls: patch generated 13 new + 75 
unchanged - 9 fixed = 88 total (was 84) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} hadoop-tools_hadoop-sls-jdk1.8.0_74 with JDK v1.8.0_74 
generated 0 new + 20 unchanged - 3 fixed = 20 total (was 23) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 1s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792322/YARN-4779.3.patch |
| JIRA Issue | YARN-4779 |
| Optional Tests |  asflicense

[jira] [Commented] (YARN-4634) Scheduler UI/Metrics need to consider cases like non-queue label mappings

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187868#comment-15187868
 ] 

Hadoop QA commented on YARN-4634:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 23s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 41s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 150m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792296/0005-YARN-4634.p

[jira] [Commented] (YARN-4108) CapacityScheduler: Improve preemption to preempt only those containers that would satisfy the incoming request

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187838#comment-15187838
 ] 

Hadoop QA commented on YARN-4108:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 15 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 59 new + 477 unchanged - 8 fixed = 536 total (was 485) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 42s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 5s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 164m 33s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org

[jira] [Updated] (YARN-4696) EntityGroupFSTimelineStore to work in the absence of an RM

2016-03-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-4696:
-
Attachment: YARN-4696-014.patch

Patch -014; address feedback

h3. {{FileSystemTimelineWriter.java}}

bq. TIMELINE_SERVICE_ENTITYFILE_FS_SUPPORT_APPEND move to YarnConfiguration?

done

bq. Why LogFDsCache#flush was changed into synchronized? I believe we're doing 
fine-grained locking here (with each of the FDs), and only flush in LogFDsCache 
is marked as synchronized? What am I missing here?

I'm not sure now; I think I was worried about two flush() calls at the same 
time. I've taken it out.
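
For the record, a minimal sketch (with simplified, hypothetical stand-ins for 
the real types) of the fine-grained locking being described - each FD guards 
itself, so the cache-level flush needs no lock of its own:

{code}
import java.io.IOException;
import java.io.Writer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: simplified stand-ins for the real LogFD / LogFDsCache types.
class LogFD {
  private final Writer out;

  LogFD(Writer out) {
    this.out = out;
  }

  // Fine-grained locking: each FD synchronizes on itself.
  synchronized void flush() throws IOException {
    out.flush();
  }
}

class LogFDsCache {
  private final Map<String, LogFD> fds = new ConcurrentHashMap<>();

  // Not synchronized: two concurrent flush() calls are safe because each
  // LogFD takes its own lock.
  void flush() throws IOException {
    for (LogFD fd : fds.values()) {
      fd.flush();
    }
  }
}
{code}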

h3. {{TimelineWriter.java}}

bq. Not sure if "Direct timeline writer" is clear enough to indicate where the 
data goes to and which pattern the writer is following? By saying "direct" 
here, do we mean we're using a write-through strategy?

I'd meant not going via the FS, but yes, utterly uninformative, especially 
given we have the URL of the endpoint. Now {{"Timeline writer posting to " + 
resURI}}

h3. {{EntityGroupFSTimelineStore.java}}

bq. In scanActiveLogs, the new variable "scanned" looks like a little bit 
confusing: when we return the variable scanned, the actual scanning jobs are 
not guaranteed to be done. So it looks like something "to be scanned" when we 
return? My only concern is this naming may give people false indication that by 
the time this method returns, there are a number of logs that are already 
scanned. This also applies to EntityLogScanner

now {{logsToScanCount}}

> EntityGroupFSTimelineStore to work in the absence of an RM
> --
>
> Key: YARN-4696
> URL: https://issues.apache.org/jira/browse/YARN-4696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-4696-001.patch, YARN-4696-002.patch, 
> YARN-4696-003.patch, YARN-4696-005.patch, YARN-4696-006.patch, 
> YARN-4696-007.patch, YARN-4696-008.patch, YARN-4696-009.patch, 
> YARN-4696-010.patch, YARN-4696-012.patch, YARN-4696-013.patch, 
> YARN-4696-014.patch
>
>
> {{EntityGroupFSTimelineStore}} now depends on an RM being up and running, 
> with the configuration pointing to it. This is a new change, and it impacts 
> testing, where you have historically been able to test without an RM running.
> The sole purpose of the probe is to automatically determine whether an app is 
> running; it falls back to "unknown" if not. If the RM connection were 
> optional, the "unknown" codepath could be called directly, relying on the age 
> of the file as a metric of completion.
> Options:
> # add a flag to disable RM connect
> # skip automatically if the RM is not defined / is set to 0.0.0.0
> # disable retries on YARN client IPC; if it fails, tag the app as unknown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4560) Make scheduler error checking message more user friendly

2016-03-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187783#comment-15187783
 ] 

Daniel Templeton commented on YARN-4560:


Looks good to me. +1 (non-binding).  [~kasha]?

> Make scheduler error checking message more user friendly
> 
>
> Key: YARN-4560
> URL: https://issues.apache.org/jira/browse/YARN-4560
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Trivial
>  Labels: supportability
> Attachments: YARN-4560.001.patch
>
>
> If the YARN properties below are poorly configured:
> {code}
> yarn.scheduler.minimum-allocation-mb
> yarn.scheduler.maximum-allocation-mb
> {code}
> The error message that shows up in the RM is:
> {panel}
> 2016-01-07 14:47:03,711 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid resource 
> scheduler memory allocation configuration, 
> yarn.scheduler.minimum-allocation-mb=-1, 
> yarn.scheduler.maximum-allocation-mb=-3, min should equal greater than 0, max 
> should be no smaller than min.
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.validateConf(FairScheduler.java:215)
> {panel}
> While it's technically correct, it's not very user friendly.
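
For illustration, a minimal sketch (assumed defaults and message wording, not 
the attached patch) of a friendlier check that reports each misconfigured 
property separately:

{code}
// Sketch only: default values and message wording are assumptions.
private void validateConf(Configuration conf) {
  int minMb = conf.getInt("yarn.scheduler.minimum-allocation-mb", 1024);
  int maxMb = conf.getInt("yarn.scheduler.maximum-allocation-mb", 8192);
  if (minMb < 0) {
    throw new YarnRuntimeException("Invalid value " + minMb
        + " for yarn.scheduler.minimum-allocation-mb;"
        + " it must be 0 or greater.");
  }
  if (maxMb < minMb) {
    throw new YarnRuntimeException("yarn.scheduler.maximum-allocation-mb ("
        + maxMb + ") must be no smaller than"
        + " yarn.scheduler.minimum-allocation-mb (" + minMb + ").");
  }
}
{code}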



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4541) Change log message in LocalizedResource#handle() to DEBUG

2016-03-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187774#comment-15187774
 ] 

Daniel Templeton commented on YARN-4541:


Looks good to me. [~kasha]?

> Change log message in LocalizedResource#handle() to DEBUG
> -
>
> Key: YARN-4541
> URL: https://issues.apache.org/jira/browse/YARN-4541
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Attachments: YARN-4541.001.patch, YARN-4541.002.patch
>
>
> This section of code can fill up a log fairly quickly.
> {code}
>if (oldState != newState) {
> LOG.info("Resource " + resourcePath + (localPath != null ?
>   "(->" + localPath + ")": "") + " transitioned from " + oldState
> + " to " + newState);
>}
> {code}
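
For illustration, the DEBUG version might look like the following sketch (the 
actual patch may differ):

{code}
if (oldState != newState && LOG.isDebugEnabled()) {
  // Guarded so the message string is only built when DEBUG logging is on.
  LOG.debug("Resource " + resourcePath + (localPath != null ?
      "(->" + localPath + ")" : "") + " transitioned from " + oldState
      + " to " + newState);
}
{code}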



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4779) Fix AM container allocation logic in SLS

2016-03-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4779:
-
Attachment: YARN-4779.3.patch

Attached ver.3 patch.

> Fix AM container allocation logic in SLS
> 
>
> Key: YARN-4779
> URL: https://issues.apache.org/jira/browse/YARN-4779
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4779.1.patch, YARN-4779.2.patch, YARN-4779.3.patch
>
>
> Currently, SLS uses an unmanaged AM for simulated map-reduce applications, 
> and the first allocated container for each app is considered to be the master 
> container.
> This can be problematic when preemption happens. CapacityScheduler preempts 
> AM containers at the lowest priority, but the simulated AM container isn't 
> recognized by the scheduler -- it is a normal container from the scheduler's 
> perspective.
> This JIRA tries to fix this logic: do a real AM allocation instead of using 
> an unmanaged AM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4634) Scheduler UI/Metrics need to consider cases like non-queue label mappings

2016-03-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187705#comment-15187705
 ] 

Wangda Tan commented on YARN-4634:
--

Looks good, +1.

> Scheduler UI/Metrics need to consider cases like non-queue label mappings
> -
>
> Key: YARN-4634
> URL: https://issues.apache.org/jira/browse/YARN-4634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4634.patch, 0002-YARN-4634.patch, 
> 0003-YARN-4634.patch, 0004-YARN-4634.patch, 0005-YARN-4634.patch
>
>
> Currently, when label-queue mappings are not available, a few assumptions are 
> made in the UI and in metrics.
> In the above case, where labels are enabled and available in the cluster but 
> without any queue mappings, the UI displays queues under labels. This is not 
> correct.
> Currently, the labels-enabled check and the availability of labels are 
> considered when rendering the scheduler UI. Henceforth we also need to check 
> whether
> - queue mappings are available
> - nodes are mapped to labels with the proper exclusivity flags on
> This ticket will also look at the default configurations in a queue when 
> labels are not mapped. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4150) Failure in TestNMClient because nodereports were not available

2016-03-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187665#comment-15187665
 ] 

Daniel Templeton commented on YARN-4150:


Looks good to me. [~rkanter]?

> Failure in TestNMClient because nodereports were not available
> --
>
> Key: YARN-4150
> URL: https://issues.apache.org/jira/browse/YARN-4150
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-4150.001.patch
>
>
> Saw a failure in a test run
> https://builds.apache.org/job/PreCommit-YARN-Build/9010/testReport/
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>   at java.util.ArrayList.rangeCheck(ArrayList.java:635)
>   at java.util.ArrayList.get(ArrayList.java:411)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.allocateContainers(TestNMClient.java:244)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestNMClient.testNMClientNoCleanupOnStop(TestNMClient.java:210)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4767) Network issues can cause persistent RM UI outage

2016-03-09 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-4767:
---
Attachment: YARN-4767.002.patch

This patch also works for single-node clusters.  The basic idea is that the 
{{AmIpFilter}} appends /redirect to the URL when it redirects to the web proxy. 
 (It only does that when it's contacted by a host other than the web proxy.)  
The web proxy looks for the /redirect when it receives a request.  If it's 
there, and if the request originated from the web proxy itself, it instead 
redirects to an error page, ending the endless loop.

The error page is implemented within the app page and is triggered by appending 
an "R" to the end of the app ID.  If the app page sees the "R", it prints the 
error instead of the app page.  This is the part I'm least confident is a 
reasonable thing to do.  Feedback welcome.
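
For illustration, a minimal sketch of the loop-breaking check on the proxy 
side. The names and URL shapes here are assumptions, not the patch itself:

{code}
import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch only: class, method, and URL shapes are assumed for illustration.
final class ProxyLoopBreaker {

  void handle(HttpServletRequest req, HttpServletResponse resp,
      String proxyHost, String appId) throws IOException {
    boolean isRedirectPath = req.getRequestURI().contains("/redirect");
    boolean cameFromProxyItself = proxyHost.equals(req.getRemoteHost());
    if (isRedirectPath && cameFromProxyItself) {
      // About to redirect to ourselves: break the loop by sending the
      // caller to the error view (the app page with the "R" marker added).
      resp.sendRedirect("/cluster/app/" + appId + "R");
    } else {
      // Normal case: forward toward the application as usual.
      resp.sendRedirect("/proxy/" + appId + "/");
    }
  }
}
{code}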

> Network issues can cause persistent RM UI outage
> 
>
> Key: YARN-4767
> URL: https://issues.apache.org/jira/browse/YARN-4767
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-4767.001.patch, YARN-4767.002.patch
>
>
> If a network issue causes an AM web app to resolve the RM proxy's address to 
> something other than what's listed in the allowed proxies list, the 
> AmIpFilter will 302 redirect the RM proxy's request back to the RM proxy.  
> The RM proxy will then consume all available handler threads connecting to 
> itself over and over, resulting in an outage of the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2016-03-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187631#comment-15187631
 ] 

Wangda Tan commented on YARN-4781:
--

[~sunilg],

I added a simple description to the JIRA. Existing PCPP is inter-queue 
preemption; it only happens when one queue's usage is beyond its limit and 
another queue is requesting more resources. In this JIRA I want to let 
preemption kick in when application usage within a queue needs rebalancing.
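
As a rough illustration of "rebalance within a queue", here is a toy sketch 
under assumed inputs - it is not the planned implementation, which would work 
on containers and Resource objects rather than plain memory totals:

{code}
import java.util.HashMap;
import java.util.Map;

// Sketch only: a toy model of intra-queue fairness.
final class IntraQueueFairness {

  // usedByApp maps each app in one queue to the memory (MB) it holds.
  // Returns, per app, how much it holds above an even per-app share and
  // is therefore a candidate amount to preempt.
  static Map<String, Long> amountAboveFairShare(
      Map<String, Long> usedByApp, long queueCapacityMb) {
    long fairShareMb = queueCapacityMb / Math.max(1, usedByApp.size());
    Map<String, Long> toPreempt = new HashMap<>();
    for (Map.Entry<String, Long> e : usedByApp.entrySet()) {
      long over = e.getValue() - fairShareMb;
      if (over > 0) {
        toPreempt.put(e.getKey(), over);
      }
    }
    return toPreempt;
  }
}
{code}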

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> We introduced the fairness queue policy in YARN-3319, which lets large 
> applications make progress without starving small applications. However, if a 
> large application takes the queue’s resources, and the containers of the 
> large app have long lifespans, small applications could still wait a long 
> time for resources, and SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we 
> need to preempt resources of queues with the fairness policy enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2016-03-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4781:
-
Description: 
We introduced the fairness queue policy in YARN-3319, which lets large 
applications make progress without starving small applications. However, if a 
large application takes the queue’s resources, and the containers of the large 
app have long lifespans, small applications could still wait a long time for 
resources, and SLAs cannot be guaranteed.

Instead of waiting for applications to release resources on their own, we need 
to preempt resources of queues with the fairness policy enabled.

  was:
We introduced the fairness queue policy in YARN-, which lets large 
applications make progress and not starve small applications. However, if a 
large application takes the queue’s resources, and the containers of the large 
app have long lifespans, small applications could still wait a long time for 
resources and SLAs cannot be guaranteed.

Instead of waiting for applications to release resources on their own, we need 
to preempt resources of queues with the fairness policy enabled.


> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> We introduced the fairness queue policy in YARN-3319, which lets large 
> applications make progress and not starve small applications. However, if a 
> large application takes the queue’s resources, and the containers of the large 
> app have long lifespans, small applications could still wait a long time for 
> resources and SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we need 
> to preempt resources of queues with the fairness policy enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2016-03-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4781:
-
Description: 
We introduced the fairness queue policy in YARN-, which lets large 
applications make progress and not starve small applications. However, if a 
large application takes the queue’s resources, and the containers of the large 
app have long lifespans, small applications could still wait a long time for 
resources and SLAs cannot be guaranteed.

Instead of waiting for applications to release resources on their own, we need 
to preempt resources of queues with the fairness policy enabled.

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> We introduced the fairness queue policy in YARN-, which lets large 
> applications make progress and not starve small applications. However, if a 
> large application takes the queue’s resources, and the containers of the large 
> app have long lifespans, small applications could still wait a long time for 
> resources and SLAs cannot be guaranteed.
> Instead of waiting for applications to release resources on their own, we need 
> to preempt resources of queues with the fairness policy enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4699) Scheduler UI and REST o/p is not in sync when -replaceLabelsOnNode is used to change label of a node

2016-03-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187580#comment-15187580
 ] 

Sunil G commented on YARN-4699:
---

Hi [~leftnoteasy],
This patch solves all issues related to capacity when we do 
{{-replaceLabelsOnNode}} on a running and used NodeManager. We also had to 
update the capacity, which was missing.

Could you please share your thoughts?
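
For context, the command from the summary looks like this in use (a sketch; the 
node may also need its port, and label "y" is taken from the scenario in the 
quoted issue below):

{code}
yarn rmadmin -replaceLabelsOnNode "node1=y"
{code}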

> Scheduler UI and REST o/p is not in sync when -replaceLabelsOnNode is used to 
> change label of a node
> 
>
> Key: YARN-4699
> URL: https://issues.apache.org/jira/browse/YARN-4699
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.7.2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: 0001-YARN-4699.patch, AfterAppFInish-LabelY-Metrics.png, 
> ForLabelX-AfterSwitch.png, ForLabelY-AfterSwitch.png
>
>
> Scenario is as follows:
> a. 2 nodes are available in the cluster (node1 with label "x", node2 with 
> label "y")
> b. Submit an application to node1 for label "x". 
> c. Change node1 label to "y" by using *replaceLabelsOnNode* command.
> d. Verify Scheduler UI for metrics such as "Used Capacity", "Absolute 
> Capacity" etc. "x" still shows some capacity.
> e. Change node1 label back to "x" and verify UI and REST o/p
> Output:
> 1. "Used Capacity", "Absolute Capacity" etc are not decremented once labels 
> is changed for a node.
> 2. UI tab for respective label shows wrong GREEN color in these cases.
> 3. REST o/p is wrong for each label after executing above scenario.
> Attaching screen shots also. This ticket will try to cover UI and REST o/p 
> fix when label is changed runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4484) Available Resource calculation for a queue is not correct when used with labels

2016-03-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4484:
--
Attachment: 0003-YARN-4484-v2.patch

Attaching a patch with the comment change. [~leftnoteasy], could you please 
take a look.

> Available Resource calculation for a queue is not correct when used with 
> labels
> ---
>
> Key: YARN-4484
> URL: https://issues.apache.org/jira/browse/YARN-4484
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4484.patch, 0002-YARN-4484.patch, 
> 0003-YARN-4484-v2.patch, 0003-YARN-4484.patch
>
>
> To calculate the available resource for a queue, we have to get the total 
> resource allocated for all labels in the queue compared to its usage. 
> Also address the comments given in 
> [YARN-4304-comments|https://issues.apache.org/jira/browse/YARN-4304?focusedCommentId=15064874&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15064874
>  ] given by [~leftnoteasy] for the same.
> ClusterMetrics related issues will also get handled once we fix this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4308) ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats

2016-03-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187558#comment-15187558
 ] 

Sunil G commented on YARN-4308:
---

Hi [~djp], [~vvasudev],
Recently we had a few discussions in one of the ATS metrics JIRAs, YARN-4172, 
regarding -1 handling for CPU usage.

Agreeing that we need to send -1 when there is no reading available, I would 
like to point out 2 cases here:
   1. When the CPU sample is taken for the {{first time}}, the current code in 
{{CpuTimeTracker}} sends -1. It is debatable whether we should send -1 for this 
case; maybe we could start with 0, or even wait for a cycle to report back.
   2. If {{CpuTimeTracker#getCpuTrackerUsagePercent}} returns -1, we can send 
the reading back to the caller as is; there is no need to operate on it. 
{{ResourceCalculatorProcessTree.UNAVAILABLE}} can be returned as the CPU usage; 
a sketch of this follows.

If we agree on these, I can update a new patch.
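
A minimal sketch of case 2, assuming {{CpuTimeTracker#getCpuTrackerUsagePercent}} 
and {{ResourceCalculatorProcessTree.UNAVAILABLE}} as they exist today; the 
surrounding method and the {{cpuTimeTracker}} reference are hypothetical:

{code}
// Pass UNAVAILABLE through untouched instead of folding a -1 reading
// into the aggregated CPU percentage.
float cpuUsagePercent = cpuTimeTracker.getCpuTrackerUsagePercent();
if (cpuUsagePercent == ResourceCalculatorProcessTree.UNAVAILABLE) {
  // No sample yet (e.g. the first heartbeat): report UNAVAILABLE as is.
  return ResourceCalculatorProcessTree.UNAVAILABLE;
}
return cpuUsagePercent;
{code}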

> ContainersAggregated CPU resource utilization reports negative usage in first 
> few heartbeats
> 
>
> Key: YARN-4308
> URL: https://issues.apache.org/jira/browse/YARN-4308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4308.patch
>
>
> NodeManager reports ContainersAggregated CPU resource utilization as a 
> negative value in the first few heartbeat cycles. I added a new debug print 
> and received the below values from heartbeats.
> {noformat}
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  ContainersResource Utilization : CpuTrackerUsagePercent : -1.0 
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:ContainersResource
>  Utilization :  CpuTrackerUsagePercent : 198.94598
> {noformat}
> It's better to send 0 as the CPU usage rather than a negative value in 
> heartbeats, even though it happens only in the first few heartbeats.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4634) Scheduler UI/Metrics need to consider cases like non-queue label mappings

2016-03-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-4634:
--
Attachment: 0005-YARN-4634.patch

Attaching an updated patch as discussed. [~leftnoteasy], could you please take 
a look.

> Scheduler UI/Metrics need to consider cases like non-queue label mappings
> -
>
> Key: YARN-4634
> URL: https://issues.apache.org/jira/browse/YARN-4634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4634.patch, 0002-YARN-4634.patch, 
> 0003-YARN-4634.patch, 0004-YARN-4634.patch, 0005-YARN-4634.patch
>
>
> Currently when label-queue mappings are not available, there are a few 
> assumptions made in the UI and in metrics.
> In the above case, where labels are enabled and available in the cluster but 
> without any queue mappings, the UI displays queues under labels. This is not 
> correct.
> Currently the labels-enabled check and the availability of labels are 
> considered to render the scheduler UI. Going forward we also need to check 
> whether 
> - queue-mappings are available
> - nodes are mapped with labels with proper exclusivity flags on
> This ticket will also look at the default configurations in a queue when 
> labels are not mapped. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4779) Fix AM container allocation logic in SLS

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187513#comment-15187513
 ] 

Hadoop QA commented on YARN-4779:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-tools/hadoop-sls: patch generated 13 new + 75 
unchanged - 9 fixed = 88 total (was 84) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} hadoop-tools_hadoop-sls-jdk1.8.0_74 with JDK v1.8.0_74 
generated 0 new + 20 unchanged - 3 fixed = 20 total (was 23) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 49s {color} 
| {color:red} hadoop-sls in the patch failed with JDK v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 52s {color} 
| {color:red} hadoop-sls in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | hadoop.yarn.sls.appmaster.TestAMSimulator |
|   | hadoop.yarn.sls.scheduler.TestTaskRunner |
| JDK v1.7.0_95 Failed junit tests | hadoop.yarn.sls.appmaster.TestAMSimulator |
\\
\\
|| Subs

[jira] [Assigned] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2016-03-09 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri reassigned YARN-4511:
-

Assignee: Karthik Kambatla  (was: Inigo Goiri)

> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Karthik Kambatla
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4781) Support intra-queue preemption for fairness ordering policy.

2016-03-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187482#comment-15187482
 ] 

Sunil G commented on YARN-4781:
---

Hi [~leftnoteasy]
Trying to understand the context here: current PCPP performs inter-queue 
preemption when one or more queues are underserved. This is when we use 
policies such as Fifo or Fair. With this new ordering policy, will this be 
handled by the scheduler itself at allocation time? Could you please share 
some details.

> Support intra-queue preemption for fairness ordering policy.
> 
>
> Key: YARN-4781
> URL: https://issues.apache.org/jira/browse/YARN-4781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4779) Fix AM container allocation logic in SLS

2016-03-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4779:
-
Attachment: YARN-4779.2.patch

Fixed warnings.

> Fix AM container allocation logic in SLS
> 
>
> Key: YARN-4779
> URL: https://issues.apache.org/jira/browse/YARN-4779
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4779.1.patch, YARN-4779.2.patch
>
>
> Currently, SLS uses an unmanaged AM for simulated map-reduce applications, 
> and the first allocated container for each app is considered to be the master 
> container.
> This could be problematic when preemption happens. CapacityScheduler preempts 
> AM containers at the lowest priority, but the simulated AM container isn't 
> recognized by the scheduler -- it is a normal container from the scheduler's 
> perspective.
> This JIRA tries to fix this logic: do a real AM allocation instead of using 
> an unmanaged AM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4108) CapacityScheduler: Improve preemption to preempt only those containers that would satisfy the incoming request

2016-03-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4108:
-
Attachment: YARN-4108.6.patch

> CapacityScheduler: Improve preemption to preempt only those containers that 
> would satisfy the incoming request
> --
>
> Key: YARN-4108
> URL: https://issues.apache.org/jira/browse/YARN-4108
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4108-design-doc-V3.pdf, 
> YARN-4108-design-doc-v1.pdf, YARN-4108-design-doc-v2.pdf, YARN-4108.1.patch, 
> YARN-4108.2.patch, YARN-4108.3.patch, YARN-4108.4.patch, YARN-4108.5.patch, 
> YARN-4108.6.patch, YARN-4108.poc.1.patch, YARN-4108.poc.2-WIP.patch, 
> YARN-4108.poc.3-WIP.patch, YARN-4108.poc.4-WIP.patch
>
>
> This is sibling JIRA for YARN-2154. We should make sure container preemption 
> is more effective.
> *Requirements:*
> 1) Can handle case of user-limit preemption
> 2) Can handle case of resource placement requirements, such as: hard-locality 
> (I only want to use rack-1) / node-constraints (YARN-3409) / black-list (I 
> don't want to use rack1 and host\[1-3\])
> 3) Can handle preemption within a queue: cross user preemption (YARN-2113), 
> cross application preemption (such as priority-based (YARN-1963) / 
> fairness-based (YARN-3319)).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1961) Fair scheduler preemption doesn't work for non-leaf queues

2016-03-09 Thread Ashwin Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187429#comment-15187429
 ] 

Ashwin Shankar commented on YARN-1961:
--

Sure, please feel to close this.

> Fair scheduler preemption doesn't work for non-leaf queues
> --
>
> Key: YARN-1961
> URL: https://issues.apache.org/jira/browse/YARN-1961
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler
>Affects Versions: 2.4.0
>Reporter: Ashwin Shankar
>Assignee: Ashwin Shankar
>  Labels: scheduler
>
> Setting minResources and minSharePreemptionTimeout to a non-leaf queue 
> doesn't cause preemption to happen when that non-leaf queue is below 
> minResources and there are outstanding demands in that non-leaf queue.
> Here is an example fs allocation config(partial) :
> {code:xml}
> <queue name="abc">
>   <minResources>3072 mb,0 vcores</minResources>
>   <minSharePreemptionTimeout>30</minSharePreemptionTimeout>
>   <!-- child leaf queues of abc -->
> </queue>
> {code}
> With the above configs, preemption doesn't seem to happen if queue abc is 
> below minShare and it has outstanding unsatisfied demands from apps in its 
> child queues. Ideally in such cases we would like preemption to kick in and 
> reclaim resources from other queues (not under queue abc).
> Looking at the code it seems like preemption checks for starvation only at 
> the leaf queue level and not at the parent level.
> {code:title=FairScheduler.java|borderStyle=solid}
> boolean isStarvedForMinShare(FSLeafQueue sched)
> boolean isStarvedForFairShare(FSLeafQueue sched)
> {code}
> This affects our use case where we have a parent queue with perhaps a 100 
> unconfigured leaf queues under it. We want to give a minshare to the parent 
> queue to protect all the leaf queues under it, but we cannot do it due to 
> this bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1961) Fair scheduler preemption doesn't work for non-leaf queues

2016-03-09 Thread Ashwin Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187431#comment-15187431
 ] 

Ashwin Shankar commented on YARN-1961:
--

s/feel/feel free

> Fair scheduler preemption doesn't work for non-leaf queues
> --
>
> Key: YARN-1961
> URL: https://issues.apache.org/jira/browse/YARN-1961
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, scheduler
>Affects Versions: 2.4.0
>Reporter: Ashwin Shankar
>Assignee: Ashwin Shankar
>  Labels: scheduler
>
> Setting minResources and minSharePreemptionTimeout to a non-leaf queue 
> doesn't cause preemption to happen when that non-leaf queue is below 
> minResources and there are outstanding demands in that non-leaf queue.
> Here is an example fs allocation config(partial) :
> {code:xml}
> <queue name="abc">
>   <minResources>3072 mb,0 vcores</minResources>
>   <minSharePreemptionTimeout>30</minSharePreemptionTimeout>
>   <!-- child leaf queues of abc -->
> </queue>
> {code}
> With the above configs, preemption doesn't seem to happen if queue abc is 
> below minShare and it has outstanding unsatisfied demands from apps in its 
> child queues. Ideally in such cases we would like preemption to kick in and 
> reclaim resources from other queues (not under queue abc).
> Looking at the code it seems like preemption checks for starvation only at 
> the leaf queue level and not at the parent level.
> {code:title=FairScheduler.java|borderStyle=solid}
> boolean isStarvedForMinShare(FSLeafQueue sched)
> boolean isStarvedForFairShare(FSLeafQueue sched)
> {code}
> This affects our use case where we have a parent queue with perhaps a 100 
> unconfigured leaf queues under it. We want to give a minshare to the parent 
> queue to protect all the leaf queues under it, but we cannot do it due to 
> this bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4634) Scheduler UI/Metrics need to consider cases like non-queue label mappings

2016-03-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187430#comment-15187430
 ] 

Sunil G commented on YARN-4634:
---

Yes, this looks fine, because all other cases will be handled with this. I'll 
update the patch. 

> Scheduler UI/Metrics need to consider cases like non-queue label mappings
> -
>
> Key: YARN-4634
> URL: https://issues.apache.org/jira/browse/YARN-4634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-4634.patch, 0002-YARN-4634.patch, 
> 0003-YARN-4634.patch, 0004-YARN-4634.patch
>
>
> Currently when label-queue mappings are not available, there are a few 
> assumptions made in the UI and in metrics.
> In the above case, where labels are enabled and available in the cluster but 
> without any queue mappings, the UI displays queues under labels. This is not 
> correct.
> Currently the labels-enabled check and the availability of labels are 
> considered to render the scheduler UI. Going forward we also need to check 
> whether 
> - queue-mappings are available
> - nodes are mapped with labels with proper exclusivity flags on
> This ticket will also look at the default configurations in a queue when 
> labels are not mapped. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-4773) Log aggregation performs extraneous filesystem operations when rolling log aggregation is disabled

2016-03-09 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe reassigned YARN-4773:


Assignee: Jun Gong

Feel free, as I'm currently busy with other tasks.  I filed it and left it 
unassigned in the hope someone in the community would have the time to pick it 
up, so thanks!

> Log aggregation performs extraneous filesystem operations when rolling log 
> aggregation is disabled
> --
>
> Key: YARN-4773
> URL: https://issues.apache.org/jira/browse/YARN-4773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Jun Gong
>Priority: Minor
>
> I noticed when log aggregation occurs for an application the nodemanager is 
> listing the application's log directory in HDFS.  Apparently this is for 
> removing old logs before uploading new ones.  This is a wasteful operation 
> when rolling log aggregation is disabled, since there will be no prior logs 
> in HDFS -- aggregation only occurs once when rolling log aggregation is 
> disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (YARN-4759) Revisit signalContainer() for docker containers

2016-03-09 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf reassigned YARN-4759:
-

Assignee: Shane Kumpf

> Revisit signalContainer() for docker containers
> ---
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Shane Kumpf
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4773) Log aggregation performs extraneous filesystem operations when rolling log aggregation is disabled

2016-03-09 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187277#comment-15187277
 ] 

Jun Gong commented on YARN-4773:


Thanks [~jlowe] for confirming it. I will attach a patch to fix it if you are 
not working on it.

> Log aggregation performs extraneous filesystem operations when rolling log 
> aggregation is disabled
> --
>
> Key: YARN-4773
> URL: https://issues.apache.org/jira/browse/YARN-4773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Priority: Minor
>
> I noticed when log aggregation occurs for an application the nodemanager is 
> listing the application's log directory in HDFS.  Apparently this is for 
> removing old logs before uploading new ones.  This is a wasteful operation 
> when rolling log aggregation is disabled, since there will be no prior logs 
> in HDFS -- aggregation only occurs once when rolling log aggregation is 
> disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4773) Log aggregation performs extraneous filesystem operations when rolling log aggregation is disabled

2016-03-09 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187270#comment-15187270
 ] 

Jason Lowe commented on YARN-4773:
--

Yes, [~hex108] that is the scenario.

> Log aggregation performs extraneous filesystem operations when rolling log 
> aggregation is disabled
> --
>
> Key: YARN-4773
> URL: https://issues.apache.org/jira/browse/YARN-4773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Priority: Minor
>
> I noticed when log aggregation occurs for an application the nodemanager is 
> listing the application's log directory in HDFS.  Apparently this is for 
> removing old logs before uploading new ones.  This is a wasteful operation 
> when rolling log aggregation is disabled, since there will be no prior logs 
> in HDFS -- aggregation only occurs once when rolling log aggregation is 
> disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4759) Revisit signalContainer() for docker containers

2016-03-09 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187259#comment-15187259
 ] 

Shane Kumpf commented on YARN-4759:
---

I'd like to work on this and will be taking ownership.

> Revisit signalContainer() for docker containers
> ---
>
> Key: YARN-4759
> URL: https://issues.apache.org/jira/browse/YARN-4759
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>
> The current signal handling (in the DockerContainerRuntime) needs to be 
> revisited for docker containers. For example, container reacquisition on NM 
> restart might not work, depending on which user the process in the container 
> runs as. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4753) Use doxia macro to generate in-page TOC of YARN site documentation

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187253#comment-15187253
 ] 

Hadoop QA commented on YARN-4753:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 7m 32s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792269/YARN-4753.001.patch |
| JIRA Issue | YARN-4753 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 7f9e768806a5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2e040d3 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/10738/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Use doxia macro to generate in-page TOC of YARN site documentation
> --
>
> Key: YARN-4753
> URL: https://issues.apache.org/jira/browse/YARN-4753
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: YARN-4753.001.patch
>
>
> Since maven-site-plugin 3.5 was released, we can use the toc macro in Markdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4773) Log aggregation performs extraneous filesystem operations when rolling log aggregation is disabled

2016-03-09 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187250#comment-15187250
 ] 

Jun Gong commented on YARN-4773:


Thanks [~jlowe] for reporting the issue. 

YARN-4720 tries to skip unnecessary NN operations on every call to 
*AppLogAggregatorImpl#uploadLogsForContainers* when pendingContainerInThisCycle 
is empty. IIUC, [~jlowe] means this case: rolling log aggregation is disabled; 
when the app completes, we call 
*AppLogAggregatorImpl#uploadLogsForContainers(true)*, and 
*pendingContainerInThisCycle* is not empty, so we end up calling 
*AppLogAggregatorImpl#cleanOldLogs*. However, we do not need to call 
*AppLogAggregatorImpl#cleanOldLogs*, because no containers' logs have been 
uploaded before.
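
A minimal sketch of the guard this suggests, assuming the 
{{rollingMonitorInterval}} field in {{AppLogAggregatorImpl}} (positive only 
when rolling log aggregation is enabled); illustrative, not the final patch:

{code}
// Skip the HDFS listing when rolling log aggregation is disabled,
// since no earlier cycle can have uploaded logs for this app.
if (rollingMonitorInterval > 0) {
  cleanOldLogs(); // rolling aggregation: prior uploads may exist
}
{code}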

> Log aggregation performs extraneous filesystem operations when rolling log 
> aggregation is disabled
> --
>
> Key: YARN-4773
> URL: https://issues.apache.org/jira/browse/YARN-4773
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Priority: Minor
>
> I noticed when log aggregation occurs for an application the nodemanager is 
> listing the application's log directory in HDFS.  Apparently this is for 
> removing old logs before uploading new ones.  This is a wasteful operation 
> when rolling log aggregation is disabled, since there will be no prior logs 
> in HDFS -- aggregation only occurs once when rolling log aggregation is 
> disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4753) Use doxia macro to generate in-page TOC of YARN site documentation

2016-03-09 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated YARN-4753:
---
Attachment: YARN-4753.001.patch

Since HADOOP-12470 came in, we can use the toc macro in the YARN docs.
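
For reference, the macro is embedded in a Markdown source file like this (the 
depth values here are illustrative):

{code}
<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
{code}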

> Use doxia macro to generate in-page TOC of YARN site documentation
> --
>
> Key: YARN-4753
> URL: https://issues.apache.org/jira/browse/YARN-4753
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: YARN-4753.001.patch
>
>
> Since maven-site-plugin 3.5 was released, we can use the toc macro in Markdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-1473) Exception from container-launch(Apache Hadoop 2.2.0)

2016-03-09 Thread Nishant Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187177#comment-15187177
 ] 

Nishant Sharma commented on YARN-1473:
--

I am also getting the same error.
What is the solution?
Please answer...


> Exception from container-launch(Apache Hadoop 2.2.0)
> 
>
> Key: YARN-1473
> URL: https://issues.apache.org/jira/browse/YARN-1473
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: CentOS5.8 and Apache Hadoop 2.2.0
>Reporter: Joy Xu
> Attachments: yarn-site.xml
>
>
> Hello all,
> I have met an exception from container-launch when I run the built-in 
> wordcount program, and the error message is as follows:
> {code}
> 13/12/05 00:17:31 INFO mapreduce.Job: Job job_1386171829089_0003 failed with 
> state FAILED due to: Application application_1386171829089_0003 failed 2 
> times due to AM Container for appattempt_1386171829089_0003_02 exited 
> with  exitCode: 1 due to: Exception from container-launch: 
> org.apache.hadoop.util.Shell$ExitCodeException: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
>   at org.apache.hadoop.util.Shell.run(Shell.java:379)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>   at java.lang.Thread.run(Thread.java:662)
> .Failing this attempt.. Failing the application.
> 13/12/05 00:17:31 INFO mapreduce.Job: Counters: 0
> {code}
> Hope someone can help. Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4767) Network issues can cause persistent RM UI outage

2016-03-09 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187173#comment-15187173
 ] 

Daniel Templeton commented on YARN-4767:


[~wilfreds], no, that won't solve the issue.  The problem is that it's not a 
circular redirect.  A circular redirect is A->A->A...  This is a loop where A 
contacts B and is redirected back to A.  From the {{HttpClient}}'s perspective, 
there's nothing wonky going on here.

> Network issues can cause persistent RM UI outage
> 
>
> Key: YARN-4767
> URL: https://issues.apache.org/jira/browse/YARN-4767
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: YARN-4767.001.patch
>
>
> If a network issue causes an AM web app to resolve the RM proxy's address to 
> something other than what's listed in the allowed proxies list, the 
> AmIpFilter will 302 redirect the RM proxy's request back to the RM proxy.  
> The RM proxy will then consume all available handler threads connecting to 
> itself over and over, resulting in an outage of the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3863) Support complex filters in TimelineReader

2016-03-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187168#comment-15187168
 ] 

Varun Saxena commented on YARN-3863:


[~sjlee0], [~djp], kindly review.

> Support complex filters in TimelineReader
> -
>
> Key: YARN-3863
> URL: https://issues.apache.org/jira/browse/YARN-3863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3863-YARN-2928.v2.01.patch, 
> YARN-3863-YARN-2928.v2.02.patch, YARN-3863-YARN-2928.v2.03.patch, 
> YARN-3863-YARN-2928.v2.04.patch, YARN-3863-feature-YARN-2928.wip.003.patch, 
> YARN-3863-feature-YARN-2928.wip.01.patch, 
> YARN-3863-feature-YARN-2928.wip.02.patch, 
> YARN-3863-feature-YARN-2928.wip.04.patch, 
> YARN-3863-feature-YARN-2928.wip.05.patch
>
>
> Currently filters in the timeline reader will return an entity only if all 
> the filter conditions hold true, i.e. only the AND operation is supported. We 
> can support the OR operation for the filters as well. Additionally, as the 
> primary backend implementation is HBase, we can design our filters in a 
> manner where they closely resemble HBase Filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3863) Support complex filters in TimelineReader

2016-03-09 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3863:
---
Attachment: (was: YARN-3863-YARN-2928.v2.04.patch)

> Support complex filters in TimelineReader
> -
>
> Key: YARN-3863
> URL: https://issues.apache.org/jira/browse/YARN-3863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3863-YARN-2928.v2.01.patch, 
> YARN-3863-YARN-2928.v2.02.patch, YARN-3863-YARN-2928.v2.03.patch, 
> YARN-3863-YARN-2928.v2.04.patch, YARN-3863-feature-YARN-2928.wip.003.patch, 
> YARN-3863-feature-YARN-2928.wip.01.patch, 
> YARN-3863-feature-YARN-2928.wip.02.patch, 
> YARN-3863-feature-YARN-2928.wip.04.patch, 
> YARN-3863-feature-YARN-2928.wip.05.patch
>
>
> Currently filters in the timeline reader will return an entity only if all 
> the filter conditions hold true, i.e. only the AND operation is supported. We 
> can support the OR operation for the filters as well. Additionally, as the 
> primary backend implementation is HBase, we can design our filters in a 
> manner where they closely resemble HBase Filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4783) Log aggregation failure for application when Nodemanager is restarted

2016-03-09 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187137#comment-15187137
 ] 

Jason Lowe commented on YARN-4783:
--

From the exception it appears the HDFS token is being cancelled before the 
nodemanager gets around to aggregating the logs.  Without a valid HDFS token 
the NM cannot perform log aggregation.  I remember there were some token 
keepalive semantics in the RM-NM protocol so nodemanagers could ask for tokens 
to be kept alive after the application completed, to perform cleanup tasks 
like log aggregation.  However, if the nodemanager was down too long, that 
window will be missed and the RM will cancel the HDFS token.

The RM logs should shed some light on exactly what happened.  It should show 
the relative timing of the following events which would be interesting to know:
# When the application completed
# When the HDFS token was cancelled by the RM (check for 'Cancelling 
HDFS_DELEGATION_TOKEN token 9 for yarn')
# When the nodemanager reconnected to the RM (and presumably started log 
aggregation shortly afterwards)


> Log aggregation failure for application when Nodemanager is restarted 
> --
>
> Key: YARN-4783
> URL: https://issues.apache.org/jira/browse/YARN-4783
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Surendra Singh Lilhore
>
> Scenario:
> =
> 1. Start NM with user dsperf:hadoop
> 2. Configure linux-execute user as dsperf
> 3. Submit application with yarn user
> 4. Once a few containers are allocated to NM 1
> 5. Nodemanager 1 is stopped (wait for expiry)
> 6. Start node manager after application is completed
> 7. Check that log aggregation is happening for the containers' logs in the 
> NM local directory
> Expected Output:
> ===
> Log aggregation should be successful
> Actual Output:
> ===
> Log aggregation not successful



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3863) Support complex filters in TimelineReader

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187132#comment-15187132
 ] 

Hadoop QA commented on YARN-3863:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
13s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 patch generated 5 new + 4 unchanged - 1 fixed = 9 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 2s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 0s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12792251/YARN-3863-YARN-2928.v2.04.patch
 |
| JIRA Issue | YARN-3863 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2e47e5afa7dd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Bu

[jira] [Updated] (YARN-3863) Support complex filters in TimelineReader

2016-03-09 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3863:
---
Attachment: YARN-3863-YARN-2928.v2.04.patch

> Support complex filters in TimelineReader
> -
>
> Key: YARN-3863
> URL: https://issues.apache.org/jira/browse/YARN-3863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3863-YARN-2928.v2.01.patch, 
> YARN-3863-YARN-2928.v2.02.patch, YARN-3863-YARN-2928.v2.03.patch, 
> YARN-3863-YARN-2928.v2.04.patch, YARN-3863-YARN-2928.v2.04.patch, 
> YARN-3863-feature-YARN-2928.wip.003.patch, 
> YARN-3863-feature-YARN-2928.wip.01.patch, 
> YARN-3863-feature-YARN-2928.wip.02.patch, 
> YARN-3863-feature-YARN-2928.wip.04.patch, 
> YARN-3863-feature-YARN-2928.wip.05.patch
>
>
> Currently filters in the timeline reader will return an entity only if all 
> the filter conditions hold true, i.e. only the AND operation is supported. We 
> can support the OR operation for the filters as well. Additionally, as the 
> primary backend implementation is HBase, we can design our filters in a 
> manner where they closely resemble HBase Filters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3863) Support complex filters in TimelineReader

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15187038#comment-15187038
 ] 

Hadoop QA commented on YARN-3863:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
25s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice:
 patch generated 11 new + 4 unchanged - 1 fixed = 15 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 45s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 2s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 9s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
|  |  Format-string method String.format(String, Object[]) called with format 
| string "%s (%s, %s:%s:)" wants 4 arguments but is given 5 in 
| org.apache.hadoop.yarn.server.tim

[jira] [Commented] (YARN-868) YarnClient should set the service address in tokens returned by getRMDelegationToken()

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15186944#comment-15186944
 ] 

Hadoop QA commented on YARN-868:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 3s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 3 new + 
22 unchanged - 0 fixed = 25 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
26s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 2m 14s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-jdk1.8.0_74 with JDK v1.8.0_74 
generated 1 new + 99 unchanged - 1 fixed = 100 total (was 100) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 31s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_95. 

[jira] [Updated] (YARN-3863) Support complex filters in TimelineReader

2016-03-09 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-3863:
---
Attachment: YARN-3863-YARN-2928.v2.04.patch

> Support complex filters in TimelineReader
> -
>
> Key: YARN-3863
> URL: https://issues.apache.org/jira/browse/YARN-3863
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-3863-YARN-2928.v2.01.patch, 
> YARN-3863-YARN-2928.v2.02.patch, YARN-3863-YARN-2928.v2.03.patch, 
> YARN-3863-YARN-2928.v2.04.patch, YARN-3863-feature-YARN-2928.wip.003.patch, 
> YARN-3863-feature-YARN-2928.wip.01.patch, 
> YARN-3863-feature-YARN-2928.wip.02.patch, 
> YARN-3863-feature-YARN-2928.wip.04.patch, 
> YARN-3863-feature-YARN-2928.wip.05.patch
>
>
> Currently, filters in the timeline reader return an entity only if all the 
> filter conditions hold true, i.e. only the AND operation is supported. We can 
> support the OR operation for the filters as well. Additionally, as the primary 
> backend implementation is HBase, we can design our filters in a manner where 
> they closely resemble HBase Filters.
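
As a rough illustration of the HBase side this points at (a sketch, assuming 
the standard HBase client filter API; the column family and qualifiers here 
are hypothetical), {{FilterList}} already models exactly this AND/OR 
combination:

{code:java}
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FilterList.Operator;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// AND semantics: a row matches only if every child filter passes.
FilterList andList = new FilterList(Operator.MUST_PASS_ALL);
andList.addFilter(new SingleColumnValueFilter(
    Bytes.toBytes("i"), Bytes.toBytes("status"),
    CompareOp.EQUAL, Bytes.toBytes("RUNNING")));
andList.addFilter(new SingleColumnValueFilter(
    Bytes.toBytes("i"), Bytes.toBytes("user"),
    CompareOp.EQUAL, Bytes.toBytes("alice")));

// OR semantics: a row matches if any child filter passes.
FilterList orList = new FilterList(Operator.MUST_PASS_ONE);
orList.addFilter(andList); // filter lists nest, so (A AND B) OR C works
orList.addFilter(new SingleColumnValueFilter(
    Bytes.toBytes("i"), Bytes.toBytes("priority"),
    CompareOp.GREATER, Bytes.toBytes(1L)));
{code}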



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4545) Allow YARN distributed shell to use ATS v1.5 APIs

2016-03-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15186913#comment-15186913
 ] 

Steve Loughran commented on YARN-4545:
--

LGTM. 

I'd refactor {{PluginStoreTestUtils.prepareDFSCluster(MiniDFSCluster 
dfsCluster)}} so that the path setup is standalone and works against any 
{{FileSystem}}; that way code like mine, which tests against other 
filesystems, can reuse the same setup code.
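
A minimal sketch of that refactor (method, path, and permission values are 
illustrative, not the current test utility API):

{code:java}
// Inside PluginStoreTestUtils (sketch; imports as in the existing class,
// plus org.apache.hadoop.fs.{FileSystem,Path} and
// org.apache.hadoop.fs.permission.FsPermission).
// The path setup depends only on a generic FileSystem, so HDFS-specific
// tests and tests against other filesystems can share it.
public static void prepareFileSystemForTest(FileSystem fs, Path activeDir,
    Path doneDir) throws IOException {
  fs.mkdirs(activeDir);
  fs.mkdirs(doneDir);
  fs.setPermission(activeDir, new FsPermission((short) 01777));
  fs.setPermission(doneDir, new FsPermission((short) 0700));
}

// The existing MiniDFSCluster entry point then just delegates:
public static MiniDFSCluster prepareDFSCluster(MiniDFSCluster dfsCluster)
    throws IOException {
  FileSystem fs = dfsCluster.getFileSystem();
  prepareFileSystemForTest(fs, new Path("/active"), new Path("/done"));
  return dfsCluster;
}
{code}

A test against, say, the local filesystem could then call 
{{prepareFileSystemForTest}} directly with its own {{FileSystem}} and paths.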

> Allow YARN distributed shell to use ATS v1.5 APIs
> -
>
> Key: YARN-4545
> URL: https://issues.apache.org/jira/browse/YARN-4545
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-4545-YARN-4265.001.patch, 
> YARN-4545-trunk.001.patch, YARN-4545-trunk.002.patch, 
> YARN-4545-trunk.003.patch, YARN-4545-trunk.004.patch, 
> YARN-4545-trunk.005.patch, YARN-4545-trunk.006.patch
>
>
> We can use the YARN distributed shell as a demo for the ATS v1.5 APIs. We 
> need to allow the distributed shell to post data with the ATS v1.5 API if 
> v1.5 is enabled in the system. We also need to provide a sample plugin to 
> read that data back out. 
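
A hedged sketch of what posting through the v1.5 entity-group API could look 
like from the distributed shell AM (the entity type and group id values are 
illustrative, and {{conf}} / {{appAttemptId}} are assumed to be in scope):

{code:java}
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntityGroupId;
import org.apache.hadoop.yarn.client.api.TimelineClient;

// Sketch, assuming ATS v1.5 is enabled in the cluster configuration.
TimelineClient client = TimelineClient.createTimelineClient();
client.init(conf);
client.start();

TimelineEntity entity = new TimelineEntity();
entity.setEntityType("DS_APP_ATTEMPT"); // illustrative type
entity.setEntityId(appAttemptId.toString());

// v1.5 groups entities so a reader plugin can locate them in the
// filesystem-backed store.
TimelineEntityGroupId groupId = TimelineEntityGroupId.newInstance(
    appAttemptId.getApplicationId(), "1"); // illustrative group id
client.putEntities(appAttemptId, groupId, entity);
{code}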



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4545) Allow YARN distributed shell to use ATS v1.5 APIs

2016-03-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15186779#comment-15186779
 ] 

Hadoop QA commented on YARN-4545:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 27s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 5s 
{color} | {color:red} root: patch generated 4 new + 47 unchanged - 0 fixed = 51 
total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 54s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |

[jira] [Commented] (YARN-4738) Notify the RM about the status of OPPORTUNISTIC containers

2016-03-09 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15186743#comment-15186743
 ] 

Arun Suresh commented on YARN-4738:
---

Thanks for the patch, [~kkaranasos]. Took a first pass at it. A couple of comments:

* In line 565, it looks like the {{containerId}} is being removed from the 
map. Shouldn't it be the {{containerTokenId}}? (See the sketch after this 
list.)
* In line 608, w.r.t. the TODO: yes, we should send the killed queued 
containers' info during node registration as well as re-registration.
* {{TestNodeStatusUpdater}} should include a test case verifying that this new 
information is sent and received at the RM. 
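
A minimal illustration of the keyed-map concern in the first bullet (the types 
and values are stand-ins for the patch's actual classes): {{Map.remove(Object)}} 
accepts any key, so removing by the wrong identifier compiles fine but is a 
silent no-op.

{code:java}
import java.util.HashMap;
import java.util.Map;

Map<String, String> queuedByTokenId = new HashMap<>();
queuedByTokenId.put("tokenId-42", "container_1457500000000_0001_01_000001");

// Wrong key: compiles, returns null, and leaves the entry behind.
queuedByTokenId.remove("container_1457500000000_0001_01_000001");

// Intended removal, keyed the same way the map was populated.
queuedByTokenId.remove("tokenId-42");
{code}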


> Notify the RM about the status of OPPORTUNISTIC containers
> --
>
> Key: YARN-4738
> URL: https://issues.apache.org/jira/browse/YARN-4738
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4738-yarn-2877.001.patch
>
>
> When an OPPORTUNISTIC container finishes its execution (either successfully 
> or because it failed or got killed), the RM needs to be notified.
> This way the AM also gets notified in turn about successfully completed 
> tasks and can reschedule failed/killed tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)