[jira] [Commented] (YARN-6759) TestRMRestart.testRMRestartWaitForPreviousAMToFinish is failing in trunk

2017-07-13 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086888#comment-16086888
 ] 

Bibin A Chundatt commented on YARN-6759:


+1. Will commit it soon.

> TestRMRestart.testRMRestartWaitForPreviousAMToFinish is failing in trunk
> 
>
> Key: YARN-6759
> URL: https://issues.apache.org/jira/browse/YARN-6759
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6759.001.patch
>
>
> {code}
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>   at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:273)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartWaitForPreviousAMToFinish(TestRMRestart.java:618)
> {code}
> refer 
> https://builds.apache.org/job/PreCommit-YARN-Build/16229/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testRMRestartWaitForPreviousAMToFinish/
>  which ran for YARN-2919
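
For context, {{GenericTestUtils.waitFor}} validates its arguments up front and fails when the total wait time is not greater than the polling interval. A minimal sketch of the failing shape and its fix, assuming the {{waitFor(Supplier<Boolean>, checkEveryMillis, waitForMillis)}} argument order and an illustrative {{rm}} condition:

{code}
// Throws IllegalArgumentException: total wait (100 ms) < interval (1000 ms).
GenericTestUtils.waitFor(() -> rm.getRMContext() != null, 1000, 100);

// OK: poll every 100 ms for up to 10 seconds (the call site must handle
// TimeoutException and InterruptedException).
GenericTestUtils.waitFor(() -> rm.getRMContext() != null, 100, 10000);
{code}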






[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086884#comment-16086884
 ] 

Hadoop QA commented on YARN-6733:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
33s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} YARN-5355 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase:
 The patch generated 0 new + 0 unchanged - 7 fixed = 0 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877226/YARN-6733-YARN-5355.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 134d863d1612 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 5791ced |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16439/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16439/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16439/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Updated] (YARN-6733) Add table for storing sub-application entities

2017-07-13 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6733:
-
Attachment: YARN-6733-YARN-5355.004.patch

Attaching v004 to fix checkstyle comments. 

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.






[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086806#comment-16086806
 ] 

Sunil G commented on YARN-5146:
---

With this patch, I am seeing some errors in CapacityScheduler. I'll check and 
post the error traces here.

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch, 
> YARN-5146.003.patch
>
>
> The current implementation in branch YARN-3368 only supports the Capacity 
> Scheduler; we want to make it support the Fair Scheduler as well.






[jira] [Commented] (YARN-6818) User limit per partition is not honored in branch-2.7 >=

2017-07-13 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086801#comment-16086801
 ] 

Jonathan Hung commented on YARN-6818:
-

Thanks [~sunilg] for the review! Added the 002 patch to pass in NO_LABEL 
instead of null. Not sure if this was the test case setup cleanup you were 
referring to.

> User limit per partition is not honored in branch-2.7 >=
> 
>
> Key: YARN-6818
> URL: https://issues.apache.org/jira/browse/YARN-6818
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6818-branch-2.7.001.patch, 
> YARN-6818-branch-2.7.002.patch
>
>
> We are seeing an issue where the user limit factor does not cap the amount of 
> resources a user can consume in a queue in a partition. Suppose you have a 
> queue with access to partition X, used resources in the default partition are 
> 0, and used resources in partition X are at the partition's user limit. This 
> is the problematic code as far as I can tell (in LeafQueue.java):
> {noformat}
> if (Resources.greaterThan(resourceCalculator, clusterResource,
>     user.getUsed(label), limit)) {
>   // if enabled, check to see if could we potentially use this node instead
>   // of a reserved node if the application has reserved containers
>   if (this.reservationsContinueLooking) {
>     if (Resources.lessThanOrEqual(resourceCalculator, clusterResource,
>         Resources.subtract(user.getUsed(),
>             application.getCurrentReservation()), limit)) {
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("User " + userName + " in queue " + getQueueName()
>             + " will exceed limit based on reservations - " + " consumed: "
>             + user.getUsed() + " reserved: "
>             + application.getCurrentReservation() + " limit: " + limit);
>       }
>       Resource amountNeededToUnreserve =
>           Resources.subtract(user.getUsed(label), limit);
>       // we can only acquire a new container if we unreserve first since we
>       // ignored the user limit. Choose the max of user limit or what was
>       // previously set by max capacity.
>       currentResoureLimits.setAmountNeededUnreserve(
>           Resources.max(resourceCalculator, clusterResource,
>               currentResoureLimits.getAmountNeededUnreserve(),
>               amountNeededToUnreserve));
>       return true;
>     }
>   }
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("User " + userName + " in queue " + getQueueName()
>         + " will exceed limit - " + " consumed: "
>         + user.getUsed() + " limit: " + limit);
>   }
>   return false;
> }
> {noformat}
> First it sees that the used resources in partition X are greater than the 
> partition's user limit. Then the reservation check also succeeds, because it 
> checks {{user.getUsed() - application.getCurrentReservation() <= limit}}, and 
> so it returns true.
> One fix is to just change {{Resources.subtract(user.getUsed(), 
> application.getCurrentReservation())}} to 
> {{Resources.subtract(user.getUsed(label), 
> application.getCurrentReservation())}}.
> This doesn't seem to be a problem in branch-2.8 and higher, since YARN-3356 
> introduces this check:
> {noformat}
> if (this.reservationsContinueLooking && checkReservations
>     && label.equals(CommonNodeLabelsManager.NO_LABEL)) {
> {noformat}
> so in this case getting the used resources in the default partition seems to 
> be correct.






[jira] [Updated] (YARN-6818) User limit per partition is not honored in branch-2.7 >=

2017-07-13 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-6818:

Attachment: YARN-6818-branch-2.7.002.patch

> User limit per partition is not honored in branch-2.7 >=
> 
>
> Key: YARN-6818
> URL: https://issues.apache.org/jira/browse/YARN-6818
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6818-branch-2.7.001.patch, 
> YARN-6818-branch-2.7.002.patch
>
>
> We are seeing an issue where the user limit factor does not cap the amount of 
> resources a user can consume in a queue in a partition. Suppose you have a 
> queue with access to partition X, used resources in the default partition are 
> 0, and used resources in partition X are at the partition's user limit. This 
> is the problematic code as far as I can tell (in LeafQueue.java):
> {noformat}
> if (Resources.greaterThan(resourceCalculator, clusterResource,
>     user.getUsed(label), limit)) {
>   // if enabled, check to see if could we potentially use this node instead
>   // of a reserved node if the application has reserved containers
>   if (this.reservationsContinueLooking) {
>     if (Resources.lessThanOrEqual(resourceCalculator, clusterResource,
>         Resources.subtract(user.getUsed(),
>             application.getCurrentReservation()), limit)) {
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("User " + userName + " in queue " + getQueueName()
>             + " will exceed limit based on reservations - " + " consumed: "
>             + user.getUsed() + " reserved: "
>             + application.getCurrentReservation() + " limit: " + limit);
>       }
>       Resource amountNeededToUnreserve =
>           Resources.subtract(user.getUsed(label), limit);
>       // we can only acquire a new container if we unreserve first since we
>       // ignored the user limit. Choose the max of user limit or what was
>       // previously set by max capacity.
>       currentResoureLimits.setAmountNeededUnreserve(
>           Resources.max(resourceCalculator, clusterResource,
>               currentResoureLimits.getAmountNeededUnreserve(),
>               amountNeededToUnreserve));
>       return true;
>     }
>   }
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("User " + userName + " in queue " + getQueueName()
>         + " will exceed limit - " + " consumed: "
>         + user.getUsed() + " limit: " + limit);
>   }
>   return false;
> }
> {noformat}
> First it sees that the used resources in partition X are greater than the 
> partition's user limit. Then the reservation check also succeeds, because it 
> checks {{user.getUsed() - application.getCurrentReservation() <= limit}}, and 
> so it returns true.
> One fix is to just change {{Resources.subtract(user.getUsed(), 
> application.getCurrentReservation())}} to 
> {{Resources.subtract(user.getUsed(label), 
> application.getCurrentReservation())}}.
> This doesn't seem to be a problem in branch-2.8 and higher, since YARN-3356 
> introduces this check:
> {noformat}
> if (this.reservationsContinueLooking && checkReservations
>     && label.equals(CommonNodeLabelsManager.NO_LABEL)) {
> {noformat}
> so in this case getting the used resources in the default partition seems to 
> be correct.






[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-07-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086786#comment-16086786
 ] 

Sunil G commented on YARN-4161:
---

[~ywskycn], could you please help to rebase the patch?

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.patch, YARN-4161.patch.1
>
>
> The Capacity Scheduler right now schedules multiple containers per heartbeat 
> if there are more resources available in the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so the throughput of the cluster suffers. I am 
> adding a feature to drive that via configuration, so that we can control the 
> number of containers assigned per heartbeat.
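
A hedged sketch of the idea; the property key below is an assumption for illustration, not necessarily the name the patch introduces:

{code}
// Hypothetical knob capping container assignments per node heartbeat.
CapacitySchedulerConfiguration conf = new CapacitySchedulerConfiguration();
conf.setInt(
    "yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments",
    1); // assign at most one container per heartbeat
{code}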






[jira] [Commented] (YARN-5947) Create LeveldbConfigurationStore class using Leveldb as backing store

2017-07-13 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086781#comment-16086781
 ] 

Jonathan Hung commented on YARN-5947:
-

Attached the 003 patch to fix miscellaneous style issues.

> Create LeveldbConfigurationStore class using Leveldb as backing store
> -
>
> Key: YARN-5947
> URL: https://issues.apache.org/jira/browse/YARN-5947
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5947.001.patch, YARN-5947-YARN-5734.001.patch, 
> YARN-5947-YARN-5734.002.patch, YARN-5947-YARN-5734.003.patch
>
>
> LeveldbConfigurationStore will extend YarnConfigurationStore to store the 
> scheduler configuration in LevelDB.
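
A minimal sketch of that idea, assuming the iq80/leveldbjni binding used elsewhere in YARN; the class shape and method name are assumptions, not the patch's API:

{code}
import static org.fusesource.leveldbjni.JniDBFactory.bytes;
import org.iq80.leveldb.DB;

public class LeveldbConfigurationStore extends YarnConfigurationStore {
  private DB db; // opened during initialization; path taken from yarn-site

  void storeConfig(String key, String value) {
    db.put(bytes(key), bytes(value)); // durable write of one config entry
  }
}
{code}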






[jira] [Updated] (YARN-5947) Create LeveldbConfigurationStore class using Leveldb as backing store

2017-07-13 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-5947:

Attachment: YARN-5947-YARN-5734.003.patch

> Create LeveldbConfigurationStore class using Leveldb as backing store
> -
>
> Key: YARN-5947
> URL: https://issues.apache.org/jira/browse/YARN-5947
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-5947.001.patch, YARN-5947-YARN-5734.001.patch, 
> YARN-5947-YARN-5734.002.patch, YARN-5947-YARN-5734.003.patch
>
>
> LeveldbConfigurationStore will extend YarnConfigurationStore to store the 
> scheduler configuration in LevelDB.






[jira] [Commented] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086778#comment-16086778
 ] 

Hadoop QA commented on YARN-4161:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-4161 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4161 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875321/YARN-4161.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16436/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.patch, YARN-4161.patch.1
>
>
> The Capacity Scheduler right now schedules multiple containers per heartbeat 
> if there are more resources available in the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so the throughput of the cluster suffers. I am 
> adding a feature to drive that via configuration, so that we can control the 
> number of containers assigned per heartbeat.






[jira] [Commented] (YARN-6818) User limit per partition is not honored in branch-2.7 >=

2017-07-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086772#comment-16086772
 ] 

Sunil G commented on YARN-6818:
---

Thanks [~jhung]. Good catch! A quick comment on the test case:

It's better to pass {{CommonNodeLabelsManager.NO_LABEL}}, or a final string 
with "" as its value, instead of null in the no-label scenario. In 
{{CSQueueUtils.loadCapacitiesByLabelsFromConf}}, I think this is done with an 
if..else case. Doing it that way may make for a cleaner test case setup.
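
A minimal sketch of that suggestion; the setter shown is illustrative test setup, not the exact code in the patch:

{code}
// Normalize a possibly-null partition to the NO_LABEL constant ("")
// before configuring per-label capacities in the test.
String partition = (label == null)
    ? CommonNodeLabelsManager.NO_LABEL : label;
queueCapacities.setCapacity(partition, 0.5f);
{code}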

> User limit per partition is not honored in branch-2.7 >=
> 
>
> Key: YARN-6818
> URL: https://issues.apache.org/jira/browse/YARN-6818
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6818-branch-2.7.001.patch
>
>
> We are seeing an issue where the user limit factor does not cap the amount of 
> resources a user can consume in a queue in a partition. Suppose you have a 
> queue with access to partition X, used resources in the default partition are 
> 0, and used resources in partition X are at the partition's user limit. This 
> is the problematic code as far as I can tell (in LeafQueue.java):
> {noformat}
> if (Resources.greaterThan(resourceCalculator, clusterResource,
>     user.getUsed(label), limit)) {
>   // if enabled, check to see if could we potentially use this node instead
>   // of a reserved node if the application has reserved containers
>   if (this.reservationsContinueLooking) {
>     if (Resources.lessThanOrEqual(resourceCalculator, clusterResource,
>         Resources.subtract(user.getUsed(),
>             application.getCurrentReservation()), limit)) {
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("User " + userName + " in queue " + getQueueName()
>             + " will exceed limit based on reservations - " + " consumed: "
>             + user.getUsed() + " reserved: "
>             + application.getCurrentReservation() + " limit: " + limit);
>       }
>       Resource amountNeededToUnreserve =
>           Resources.subtract(user.getUsed(label), limit);
>       // we can only acquire a new container if we unreserve first since we
>       // ignored the user limit. Choose the max of user limit or what was
>       // previously set by max capacity.
>       currentResoureLimits.setAmountNeededUnreserve(
>           Resources.max(resourceCalculator, clusterResource,
>               currentResoureLimits.getAmountNeededUnreserve(),
>               amountNeededToUnreserve));
>       return true;
>     }
>   }
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("User " + userName + " in queue " + getQueueName()
>         + " will exceed limit - " + " consumed: "
>         + user.getUsed() + " limit: " + limit);
>   }
>   return false;
> }
> {noformat}
> First it sees that the used resources in partition X are greater than the 
> partition's user limit. Then the reservation check also succeeds, because it 
> checks {{user.getUsed() - application.getCurrentReservation() <= limit}}, and 
> so it returns true.
> One fix is to just change {{Resources.subtract(user.getUsed(), 
> application.getCurrentReservation())}} to 
> {{Resources.subtract(user.getUsed(label), 
> application.getCurrentReservation())}}.
> This doesn't seem to be a problem in branch-2.8 and higher, since YARN-3356 
> introduces this check:
> {noformat}
> if (this.reservationsContinueLooking && checkReservations
>     && label.equals(CommonNodeLabelsManager.NO_LABEL)) {
> {noformat}
> so in this case getting the used resources in the default partition seems to 
> be correct.






[jira] [Commented] (YARN-6792) Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086769#comment-16086769
 ] 

Hudson commented on YARN-6792:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12005 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12005/])
YARN-6792. Incorrect XML convertion in NodeIDsInfo and (sunilg: rev 
228ddaa31d812533b862576445494bc2cd8a2884)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodeIDsInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/LabelsToNodesInfo.java


> Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo
> -
>
> Key: YARN-6792
> URL: https://issues.apache.org/jira/browse/YARN-6792
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6792.v1.patch, YARN-6792.v2.patch
>
>
> NodeIDsInfo contains a typo and there is a missing constructor in 
> LabelsToNodesInfo. These bugs do not allow a correct conversion to XML of 
> LabelsToNodesInfo.
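
For background on why a missing constructor breaks the conversion: JAXB instantiates dao classes reflectively and requires a no-arg constructor. A minimal sketch (annotation values assumed, fields elided):

{code}
@XmlRootElement(name = "labelsToNodesInfo")
@XmlAccessorType(XmlAccessType.FIELD)
public class LabelsToNodesInfo {
  public LabelsToNodesInfo() {
    // required by JAXB for XML marshalling/unmarshalling
  }
}
{code}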






[jira] [Commented] (YARN-6714) IllegalStateException while handling APP_ATTEMPT_REMOVED event when async-scheduling enabled in CapacityScheduler

2017-07-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086766#comment-16086766
 ] 

Sunil G commented on YARN-6714:
---

[~Tao Yang] Could you please help to check the warnings?

> IllegalStateException while handling APP_ATTEMPT_REMOVED event when 
> async-scheduling enabled in CapacityScheduler
> -
>
> Key: YARN-6714
> URL: https://issues.apache.org/jira/browse/YARN-6714
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6714.001.patch, YARN-6714.002.patch, 
> YARN-6714.003.patch, YARN-6714.branch-2.003.patch, 
> YARN-6714.branch-2.004.patch
>
>
> Currently, in the async-scheduling mode of CapacityScheduler, after an AM 
> fails over and unreserves all reserved containers, the scheduler still has a 
> chance to get and commit an outdated reserve proposal of the failed app 
> attempt. This problem happened to an app in our cluster: when this app 
> stopped, it unreserved all reserved containers and compared each 
> appAttemptId with the current appAttemptId; on a mismatch it threw an 
> IllegalStateException and crashed the RM.
> Error log:
> {noformat}
> 2017-06-08 11:02:24,339 FATAL [ResourceManager Event Processor] 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ATTEMPT_REMOVED to the scheduler
> java.lang.IllegalStateException: Trying to unreserve  for application 
> appattempt_1495188831758_0121_02 when currently reserved  for application 
> application_1495188831758_0121 on node host: node1:45454 #containers=2 
> available=... used=...
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode.unreserveResource(FiCaSchedulerNode.java:123)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.unreserve(FiCaSchedulerApp.java:845)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.completedContainer(LeafQueue.java:1787)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.completedContainerInternal(CapacityScheduler.java:1957)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.completedContainer(AbstractYarnScheduler.java:586)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.doneApplicationAttempt(CapacityScheduler.java:966)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1740)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:152)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:822)
> at java.lang.Thread.run(Thread.java:834)
> {noformat}
> When async-scheduling is enabled, CapacityScheduler#doneApplicationAttempt 
> and CapacityScheduler#tryCommit both need to acquire the write lock before 
> executing, so we can check the app attempt state in the commit process to 
> avoid committing outdated proposals.
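
A hedged sketch of such a guard in the commit path; the variable wiring is hypothetical, not the exact patch:

{code}
// Reject proposals from an app attempt that is no longer current.
ApplicationAttemptId proposed = schedulerApp.getApplicationAttemptId();
ApplicationAttemptId current =
    rmApp.getCurrentAppAttempt().getAppAttemptId();
if (!proposed.equals(current)) {
  return false; // outdated proposal; do not commit
}
{code}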






[jira] [Commented] (YARN-6280) Introduce deselect query param to skip ResourceRequest from getApp/getApps REST API

2017-07-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086764#comment-16086764
 ] 

Sunil G commented on YARN-6280:
---

[~cltlfcjin]

A couple of comments on the branch-2 patch (sorry for the late findings):
# I could see the addition of DeSelectFields in the {{getApps}} API for 
branch-2, but the same change is not visible for the {{getApp}} API. It is 
available in trunk.
# In {{ResourceManagerRest.md}}, I could see an explanation for deSelects in 
trunk, as below.
{noformat}
* deSelects - a generic fields which will be skipped in the result.
{noformat}

But the same is missing in branch-2. Please help to check the same.
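
For reference, a sample request using the parameter; the value shown is an assumption based on the trunk documentation quoted above:

{noformat}
http://<rm address:port>/ws/v1/cluster/apps?deSelects=resourceRequests
{noformat}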

> Introduce deselect query param to skip ResourceRequest from getApp/getApps 
> REST API
> ---
>
> Key: YARN-6280
> URL: https://issues.apache.org/jira/browse/YARN-6280
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager, restapi
>Affects Versions: 2.7.3
>Reporter: Lantao Jin
>Assignee: Lantao Jin
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6280.001.patch, YARN-6280.002.patch, 
> YARN-6280.003.patch, YARN-6280.004.patch, YARN-6280.005.patch, 
> YARN-6280.006.patch, YARN-6280.007.patch, YARN-6280.008.patch, 
> YARN-6280.009.patch, YARN-6280.010.patch, YARN-6280.011.patch, 
> YARN-6280-branch-2.001.patch
>
>
> Beginning with v2.7, the ResourceManager Cluster Applications REST API 
> returns the ResourceRequest list. It is a very large construct in AppInfo.
> As a test, we use the URI below to query only 2 results:
> http://<rm address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2
> The results are very different:
> ||Hadoop version||Total Characters||Total Words||Total Lines||Size||
> |2.4.1|1192|42|42|1.2 KB|
> |2.7.1|1222179|48740|48735|1.21 MB|
> Most RESTful API requesters don't know about this after upgrading, and their 
> old queries may make the ResourceManager consume more GC time and become 
> slower. Even if they know about it, they have no way to reduce the impact on 
> the ResourceManager except to slow down their query frequency.
> The patch adds a query parameter "showResourceRequests" to help requesters 
> who don't need this information reduce the overhead. For interface 
> compatibility, the default value is true if they don't set the parameter, so 
> the behaviour is the same as now.






[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086758#comment-16086758
 ] 

Hadoop QA commented on YARN-6733:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} YARN-5355 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase:
 The patch generated 2 new + 0 unchanged - 7 fixed = 2 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-6733 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877212/YARN-6733-YARN-5355.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 92b6395b6a94 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 5791ced |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16435/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16435/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16435/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16435/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-6720) Support updating FPGA related constraint node label after FPGA device re-configuration

2017-07-13 Thread Zhankun Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086747#comment-16086747
 ] 

Zhankun Tang commented on YARN-6720:


[~Naganarasimha], sorry for the late reply.

Yeah. So far, I can only think of several new attributes for the GPU/FPGA 
resource handlers to use. Maybe it's fine that we define some constants for 
GPU/FPGA first and improve them if this hard-coding turns out not to be 
flexible enough?

> Support updating FPGA related constraint node label after FPGA device 
> re-configuration
> --
>
> Key: YARN-6720
> URL: https://issues.apache.org/jira/browse/YARN-6720
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
> Attachments: 
> Storing-and-Updating-extra-FPGA-resource-attributes-in-hdfs_v1.pdf
>
>
> In order to provide globally optimal scheduling for mutable FPGA resources, 
> it seems an easy and direct way is to utilize constraint node labels 
> (YARN-3409) instead of extending the global scheduler (YARN-3926) to match 
> both resource count and attributes.
> The rough idea is that the AM sets the constraint node label expression to 
> request containers on the nodes whose FPGA devices have the matching IP, and 
> the NM resource handler then updates the node constraint label if there is 
> an FPGA device re-configuration.
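
A hedged sketch of the AM side of that idea; the label name encodes the loaded FPGA IP and is purely an assumption, since the constraint-label syntax was still under design in YARN-3409:

{code}
// Hypothetical: request a container only on nodes whose FPGA currently
// has the desired IP (bitstream) loaded.
ResourceRequest req = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY,
    Resource.newInstance(4096, 2), 1);
req.setNodeLabelExpression("FPGA_IP_matrix_mul_v1"); // assumed label name
{code}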






[jira] [Commented] (YARN-6792) Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo

2017-07-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086744#comment-16086744
 ] 

Sunil G commented on YARN-6792:
---

+1. Thanks [~subru]. Yes, I will help to commit it now.

> Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo
> -
>
> Key: YARN-6792
> URL: https://issues.apache.org/jira/browse/YARN-6792
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Blocker
> Attachments: YARN-6792.v1.patch, YARN-6792.v2.patch
>
>
> NodeIDsInfo contains a typo and there is a missing constructor in 
> LabelsToNodesInfo. These bugs do not allow a correct conversion to XML of 
> LabelsToNodesInfo.






[jira] [Updated] (YARN-6733) Add table for storing sub-application entities

2017-07-13 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6733:
-
Attachment: YARN-6733-YARN-5355.003.patch

Uploading v003 with the actual user id at the end of the row key. This user id 
is the user who runs the AM. The sub-app user is the doAs user.

The row key now is:

{code}
subAppUserId ! clusterId ! entityType ! entityPrefix ! entityId ! userId
{code}
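
A minimal sketch of composing such a key; the plain "!" join is illustrative only, since the real writer encodes components and escapes separators:

{code}
// Join the row key components in the documented order (all assumed to be
// pre-encoded Strings here).
String rowKey = String.join("!",
    subAppUserId, clusterId, entityType, entityPrefix, entityId, userId);
{code}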

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these 
> DAGs as a doAs user. ATSv2 should store this information in a new table, 
> perhaps called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.






[jira] [Commented] (YARN-5953) Create CLI for changing YARN configurations

2017-07-13 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086729#comment-16086729
 ] 

Jonathan Hung commented on YARN-5953:
-

BTW, it seems some of the files which moved locations in the patch did not get 
moved in the commit (e.g. QueueConfigInfo.java moved to o/a/h/y/webapp/dao in 
the patch but wasn't moved in the commit). I force-pushed a new commit. Just 
FYI.

> Create CLI for changing YARN configurations
> ---
>
> Key: YARN-5953
> URL: https://issues.apache.org/jira/browse/YARN-5953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: YARN-5734
>
> Attachments: YARN-5953-YARN-5734.001.patch, 
> YARN-5953-YARN-5734.002.patch, YARN-5953-YARN-5734.003.patch
>
>
> Based on the design in YARN-5734.






[jira] [Commented] (YARN-6821) Move FederationStateStore SQL DDL files from test resource to sbin

2017-07-13 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086721#comment-16086721
 ] 

Carlo Curino commented on YARN-6821:


+1

> Move FederationStateStore SQL DDL files from test resource to sbin
> --
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6821-YARN-2915-v1.patch, 
> YARN-6821-YARN-2915-v2.patch
>
>
> The FederationStateStore SQL DDL files are currently in _src/test/resources_ 
> as there's no compile-time dependency. This jira proposes to move them to 
> _bin_ to ensure they are part of the distro.






[jira] [Commented] (YARN-6815) [Bug] FederationStateStoreFacade return behavior should be consistent irrespective of whether caching is enabled or not

2017-07-13 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086720#comment-16086720
 ] 

Carlo Curino commented on YARN-6815:


+1

> [Bug] FederationStateStoreFacade return behavior should be consistent 
> irrespective of whether caching is enabled or not
> ---
>
> Key: YARN-6815
> URL: https://issues.apache.org/jira/browse/YARN-6815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6815-YARN-2915.v1.patch, 
> YARN-6815-YARN-2915.v2.patch, YARN-6815-YARN-2915.v3.patch
>
>
> {{FederationStateStoreFacade::getSubCluster/getPolicyConfiguration}} returns 
> null if caching is enabled, or throws a YarnException if caching is 
> disabled, when the queried entity is absent. This jira proposes to make the 
> return behavior consistent to ensure correctness of clients like 
> {{RouterPolicyFacade}}.
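
A hedged sketch of one way to make the two paths consistent; the surrounding wiring is hypothetical:

{code}
// With caching on, translate a cache miss into the same YarnException
// the non-cached path would throw.
SubClusterInfo info = isCachingEnabled()
    ? cache.get(subClusterId) // may be null on a miss
    : stateStore.getSubCluster(
        GetSubClusterInfoRequest.newInstance(subClusterId))
          .getSubClusterInfo();
if (info == null) {
  throw new YarnException("SubCluster " + subClusterId + " does not exist");
}
{code}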






[jira] [Commented] (YARN-6821) Move FederationStateStore SQL DDL files from test resource to sbin

2017-07-13 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086718#comment-16086718
 ] 

Subru Krishnan commented on YARN-6821:
--

The test case failures are unrelated.

> Move FederationStateStore SQL DDL files from test resource to sbin
> --
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6821-YARN-2915-v1.patch, 
> YARN-6821-YARN-2915-v2.patch
>
>
> The FederationStateStore SQL DDL files are currently in _src/test/resources_ 
> as there's no compile-time dependency. This jira proposes to move them to 
> _bin_ to ensure they are part of the distro.






[jira] [Commented] (YARN-6792) Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo

2017-07-13 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086707#comment-16086707
 ] 

Subru Krishnan commented on YARN-6792:
--

[~sunilg], Thanks for reviewing. Will you be committing, or shall I, so that 
[~giovanni.fumarola] can be unblocked for YARN-5412?

> Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo
> -
>
> Key: YARN-6792
> URL: https://issues.apache.org/jira/browse/YARN-6792
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Blocker
> Attachments: YARN-6792.v1.patch, YARN-6792.v2.patch
>
>
> NodeIDsInfo contains a typo and there is a missing constructor in 
> LabelsToNodesInfo. These bugs do not allow a correct conversion to XML of 
> LabelsToNodesInfo.






[jira] [Commented] (YARN-6818) User limit per partition is not honored in branch-2.7 >=

2017-07-13 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086662#comment-16086662
 ] 

Jonathan Hung commented on YARN-6818:
-

Hi, [~Naganarasimha], attached a patch for branch-2.7.

> User limit per partition is not honored in branch-2.7 >=
> 
>
> Key: YARN-6818
> URL: https://issues.apache.org/jira/browse/YARN-6818
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6818-branch-2.7.001.patch
>
>
> We are seeing an issue where the user limit factor does not cap the amount of 
> resources a user can consume in a queue in a partition. Suppose you have a 
> queue with access to partition X, used resources in the default partition are 
> 0, and used resources in partition X are at the partition's user limit. This 
> is the problematic code as far as I can tell (in LeafQueue.java):
> {noformat}
> if (Resources.greaterThan(resourceCalculator, clusterResource,
>     user.getUsed(label), limit)) {
>   // if enabled, check to see if could we potentially use this node instead
>   // of a reserved node if the application has reserved containers
>   if (this.reservationsContinueLooking) {
>     if (Resources.lessThanOrEqual(resourceCalculator, clusterResource,
>         Resources.subtract(user.getUsed(),
>             application.getCurrentReservation()), limit)) {
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("User " + userName + " in queue " + getQueueName()
>             + " will exceed limit based on reservations - " + " consumed: "
>             + user.getUsed() + " reserved: "
>             + application.getCurrentReservation() + " limit: " + limit);
>       }
>       Resource amountNeededToUnreserve =
>           Resources.subtract(user.getUsed(label), limit);
>       // we can only acquire a new container if we unreserve first since we
>       // ignored the user limit. Choose the max of user limit or what was
>       // previously set by max capacity.
>       currentResoureLimits.setAmountNeededUnreserve(
>           Resources.max(resourceCalculator, clusterResource,
>               currentResoureLimits.getAmountNeededUnreserve(),
>               amountNeededToUnreserve));
>       return true;
>     }
>   }
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("User " + userName + " in queue " + getQueueName()
>         + " will exceed limit - " + " consumed: "
>         + user.getUsed() + " limit: " + limit);
>   }
>   return false;
> }
> {noformat}
> First it sees that the used resources in partition X are greater than the 
> partition's user limit. Then the reservation check also succeeds, because it 
> checks {{user.getUsed() - application.getCurrentReservation() <= limit}}, and 
> so it returns true.
> One fix is to just change {{Resources.subtract(user.getUsed(), 
> application.getCurrentReservation())}} to 
> {{Resources.subtract(user.getUsed(label), 
> application.getCurrentReservation())}}.
> This doesn't seem to be a problem in branch-2.8 and higher, since YARN-3356 
> introduces this check:
> {noformat}
> if (this.reservationsContinueLooking && checkReservations
>     && label.equals(CommonNodeLabelsManager.NO_LABEL)) {
> {noformat}
> so in this case getting the used resources in the default partition seems to 
> be correct.






[jira] [Updated] (YARN-6818) User limit per partition is not honored in branch-2.7 >=

2017-07-13 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-6818:

Attachment: YARN-6818-branch-2.7.001.patch

> User limit per partition is not honored in branch-2.7 >=
> 
>
> Key: YARN-6818
> URL: https://issues.apache.org/jira/browse/YARN-6818
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Attachments: YARN-6818-branch-2.7.001.patch
>
>
> We are seeing an issue where the user limit factor does not cap the amount of 
> resources a user can consume in a queue in a partition. Suppose you have a 
> queue with access to partition X, used resources in the default partition are 
> 0, and used resources in partition X are at the partition's user limit. This 
> is the problematic code as far as I can tell (in LeafQueue.java):
> {noformat}
> if (Resources.greaterThan(resourceCalculator, clusterResource,
>     user.getUsed(label), limit)) {
>   // if enabled, check to see if could we potentially use this node instead
>   // of a reserved node if the application has reserved containers
>   if (this.reservationsContinueLooking) {
>     if (Resources.lessThanOrEqual(resourceCalculator, clusterResource,
>         Resources.subtract(user.getUsed(),
>             application.getCurrentReservation()), limit)) {
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("User " + userName + " in queue " + getQueueName()
>             + " will exceed limit based on reservations - " + " consumed: "
>             + user.getUsed() + " reserved: "
>             + application.getCurrentReservation() + " limit: " + limit);
>       }
>       Resource amountNeededToUnreserve =
>           Resources.subtract(user.getUsed(label), limit);
>       // we can only acquire a new container if we unreserve first since we
>       // ignored the user limit. Choose the max of user limit or what was
>       // previously set by max capacity.
>       currentResoureLimits.setAmountNeededUnreserve(
>           Resources.max(resourceCalculator, clusterResource,
>               currentResoureLimits.getAmountNeededUnreserve(),
>               amountNeededToUnreserve));
>       return true;
>     }
>   }
>   if (LOG.isDebugEnabled()) {
>     LOG.debug("User " + userName + " in queue " + getQueueName()
>         + " will exceed limit - " + " consumed: "
>         + user.getUsed() + " limit: " + limit);
>   }
>   return false;
> }
> {noformat}
> First it sees that the used resources in partition X are greater than the 
> partition's user limit. Then the reservation check also succeeds, because it 
> checks {{user.getUsed() - application.getCurrentReservation() <= limit}}, and 
> so it returns true.
> One fix is to just change {{Resources.subtract(user.getUsed(), 
> application.getCurrentReservation())}} to 
> {{Resources.subtract(user.getUsed(label), 
> application.getCurrentReservation())}}.
> This doesn't seem to be a problem in branch-2.8 and higher, since YARN-3356 
> introduces this check:
> {noformat}
> if (this.reservationsContinueLooking && checkReservations
>     && label.equals(CommonNodeLabelsManager.NO_LABEL)) {
> {noformat}
> so in this case getting the used resources in the default partition seems to 
> be correct.






[jira] [Commented] (YARN-6807) Adding required missing configs to Federation configuration guide based on e2e testing

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086661#comment-16086661
 ] 

Hadoop QA commented on YARN-6807:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
12s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877199/YARN-6807-YARN-2915-v1.4.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 9fd808160783 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 590d959 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16433/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Adding required missing configs to Federation configuration guide based on 
> e2e testing
> --
>
> Key: YARN-6807
> URL: https://issues.apache.org/jira/browse/YARN-6807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, federation
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Tanuj Nayak
> Attachments: YARN-6807-YARN-2915-v1.2.patch, 
> YARN-6807-YARN-2915-v1.3.patch, YARN-6807-YARN-2915-v1.4.patch, 
> YARN-6807-YARN-2915-v1.patch
>
>
> We identified some missing configs that are required for e2e run. This JIRA 
> proposes to update the documentation to include the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6807) Adding required missing configs to Federation configuration guide based on e2e testing

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086656#comment-16086656
 ] 

Hadoop QA commented on YARN-6807:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
25s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877198/YARN-6807-YARN-2915-v1.3.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux b3c4f4f939b0 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 590d959 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/16432/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16432/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Adding required missing configs to Federation configuration guide based on 
> e2e testing
> --
>
> Key: YARN-6807
> URL: https://issues.apache.org/jira/browse/YARN-6807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, federation
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Tanuj Nayak
> Attachments: YARN-6807-YARN-2915-v1.2.patch, 
> YARN-6807-YARN-2915-v1.3.patch, YARN-6807-YARN-2915-v1.4.patch, 
> YARN-6807-YARN-2915-v1.patch
>
>
> We identified some missing configs that are required for e2e run. This JIRA 
> proposes to update the documentation to include the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6821) Move FederationStateStore SQL DDL files from test resource to sbin

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086655#comment-16086655
 ] 

Hadoop QA commented on YARN-6821:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
48s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
11s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
3s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 25s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877178/YARN-6821-YARN-2915-v2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 8180ebfed967 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 590d959 |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16430/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16430/testReport/ |
| modules | C: hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
. |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16430/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Move FederationStateStore SQL DDL files from test resource to sbin
> 

[jira] [Updated] (YARN-6807) Adding required missing configs to Federation configuration guide based on e2e testing

2017-07-13 Thread Tanuj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Nayak updated YARN-6807:
--
Attachment: YARN-6807-YARN-2915-v1.4.patch

> Adding required missing configs to Federation configuration guide based on 
> e2e testing
> --
>
> Key: YARN-6807
> URL: https://issues.apache.org/jira/browse/YARN-6807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, federation
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Tanuj Nayak
> Attachments: YARN-6807-YARN-2915-v1.2.patch, 
> YARN-6807-YARN-2915-v1.3.patch, YARN-6807-YARN-2915-v1.4.patch, 
> YARN-6807-YARN-2915-v1.patch
>
>
> We identified some missing configs that are required for e2e run. This JIRA 
> proposes to update the documentation to include the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6807) Adding required missing configs to Federation configuration guide based on e2e testing

2017-07-13 Thread Tanuj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Nayak updated YARN-6807:
--
Attachment: YARN-6807-YARN-2915-v1.3.patch

> Adding required missing configs to Federation configuration guide based on 
> e2e testing
> --
>
> Key: YARN-6807
> URL: https://issues.apache.org/jira/browse/YARN-6807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, federation
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Tanuj Nayak
> Attachments: YARN-6807-YARN-2915-v1.2.patch, 
> YARN-6807-YARN-2915-v1.3.patch, YARN-6807-YARN-2915-v1.patch
>
>
> We identified some missing configs that are required for e2e run. This JIRA 
> proposes to update the documentation to include the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6807) Adding required missing configs to Federation configuration guide based on e2e testing

2017-07-13 Thread Tanuj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Nayak updated YARN-6807:
--
Attachment: (was: YARN-6807-YARN-2915-v1.3.patch)

> Adding required missing configs to Federation configuration guide based on 
> e2e testing
> --
>
> Key: YARN-6807
> URL: https://issues.apache.org/jira/browse/YARN-6807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, federation
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Tanuj Nayak
> Attachments: YARN-6807-YARN-2915-v1.2.patch, 
> YARN-6807-YARN-2915-v1.patch
>
>
> We identified some missing configs that are required for e2e run. This JIRA 
> proposes to update the documentation to include the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6807) Adding required missing configs to Federation configuration guide based on e2e testing

2017-07-13 Thread Tanuj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Nayak updated YARN-6807:
--
Attachment: YARN-6807-YARN-2915-v1.3.patch

> Adding required missing configs to Federation configuration guide based on 
> e2e testing
> --
>
> Key: YARN-6807
> URL: https://issues.apache.org/jira/browse/YARN-6807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, federation
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Tanuj Nayak
> Attachments: YARN-6807-YARN-2915-v1.2.patch, 
> YARN-6807-YARN-2915-v1.patch
>
>
> We identified some missing configs that are required for e2e run. This JIRA 
> proposes to update the documentation to include the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-13 Thread daemon (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086624#comment-16086624
 ] 

daemon commented on YARN-6769:
--

[~yufeigu], thanks Yufei. My real name is zhouyunfan.
Thank you so much for doing so much for me!

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Assignee: daemon
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-6769.001.patch, YARN-6769.002.patch, 
> YARN-6769.003.patch, YARN-6769.004.patch
>
>
> When using FairScheduler as the RM scheduler, we sort all queues or 
> applications before assigning a container.
> We use FairSharePolicy#compare as the comparator, but the comparator is not 
> perfect.
> It has a problem, as below:
> 1. When a queue's resource usage is over its minShare (minResources), it is 
> placed behind queues whose demand is zero, so the zero-demand queues get a 
> greater opportunity to receive resources even though they do not want any. 
> This wastes scheduling time when assigning containers to queues or 
> applications.
> I have fixed it, and I will upload the patch to this jira.
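
A minimal sketch of the proposed ordering tweak (hypothetical; {{compareFairShare}} here stands in for the existing FairSharePolicy logic and is not a real method name):

{code}
// Hypothetical sketch: sort zero-demand schedulables after everyone else so
// they stop being offered containers they do not want.
public int compare(Schedulable s1, Schedulable s2) {
  boolean noDemand1 = s1.getDemand().equals(Resources.none());
  boolean noDemand2 = s2.getDemand().equals(Resources.none());
  if (noDemand1 != noDemand2) {
    return noDemand1 ? 1 : -1;        // the zero-demand schedulable sorts last
  }
  return compareFairShare(s1, s2);    // fall back to the usual fair-share ordering
}
{code}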



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5412) Create a proxy chain for ResourceManager REST API in the Router

2017-07-13 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086592#comment-16086592
 ] 

Giovanni Matteo Fumarola commented on YARN-5412:


By introducing {{capacity-scheduler.xml}}, Jenkins is able to execute the new 
test class {{TestRouterWebServicesREST}}.
As soon as YARN-6792 is checked in, I will submit a new patch that addresses 
all the remaining Yetus warnings and Carlo's feedback. 

> Create a proxy chain for ResourceManager REST API in the Router
> ---
>
> Key: YARN-5412
> URL: https://issues.apache.org/jira/browse/YARN-5412
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5412-YARN-2915.1.patch, 
> YARN-5412-YARN-2915.2.patch, YARN-5412-YARN-2915.3.patch, 
> YARN-5412-YARN-2915.proto.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for the ResourceManager REST API in 
> the Router. This provides a placeholder for:
> 1) throttling mis-behaving clients (YARN-1546)
> 2) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern, as we did in YARN-2884, to 
> generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6805) NPE in LinuxContainerExecutor due to null PrivilegedOperationException exit code

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086577#comment-16086577
 ] 

Hudson commented on YARN-6805:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12003 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12003/])
YARN-6805. NPE in LinuxContainerExecutor due to null (jlowe: rev 
f76f5c0919cdb0b032edb309d137093952e77268)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperationException.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java
Revert "YARN-6805. NPE in LinuxContainerExecutor due to null (jlowe: rev 
0ffca5d347df0acb1979dff7a07ae88ea834adc7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperationException.java
YARN-6805. NPE in LinuxContainerExecutor due to null (jlowe: rev 
ebc048cc055d0f7d1b85bc0b6f56cd15673e837d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperationException.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java


> NPE in LinuxContainerExecutor due to null PrivilegedOperationException exit 
> code
> 
>
> Key: YARN-6805
> URL: https://issues.apache.org/jira/browse/YARN-6805
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-6805.001.patch
>
>
> The LinuxContainerExecutor contains a number of code snippets like this:
> {code}
> } catch (PrivilegedOperationException e) {
>   int exitCode = e.getExitCode();
> {code}
> PrivilegedOperationException#getExitCode can return null if the operation was 
> interrupted, so when the JVM does auto-unboxing on that last line it can NPE 
> if there was no exit code.
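
A hedged sketch of a null-safe alternative (the helper name and sentinel value are illustrative, not the actual patch):

{code}
// Hypothetical helper: unbox defensively so a missing exit code cannot NPE.
private static int safeExitCode(PrivilegedOperationException e) {
  Integer exitCode = e.getExitCode();  // may be null if the operation was interrupted
  return exitCode != null ? exitCode : -1;  // -1 is an illustrative sentinel
}
{code}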



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6815) [Bug] FederationStateStoreFacade return behavior should be consistent irrespective of whether caching is enabled or not

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086575#comment-16086575
 ] 

Hadoop QA commented on YARN-6815:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
18s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6815 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877182/YARN-6815-YARN-2915.v3.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 671a70596e10 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 590d959 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16431/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16431/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Bug] FederationStateStoreFacade return behavior should be consistent 
> irrespective of whether caching is enabled or not
> ---
>
> Key: YARN-6815
> URL: https://issues.apache.org/jira/browse/YARN-6815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> 

[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-13 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086576#comment-16086576
 ] 

Vrushali C commented on YARN-6733:
--

hey [~rohithsharma]

So I remembered why we wanted to put the actual user name at the end of the 
row key (not the sub-app user, but the actual user who is running the app 
master). We wanted to do that to protect against a sub-app user who may be 
compromised and decides to go on a writing spree, overwriting other users' 
data. So we were going to add the AM user at the end, so that we know which 
user is writing this. 

I will update the patch shortly. 
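
To illustrate, a conceptual layout for such a row key might be (field names here are illustrative, not the final schema):

{noformat}
subAppUserId ! clusterId ! entityType ! entityIdPrefix ! entityId ! amUserId
{noformat}

Appending the AM user lets the storage layer attribute every write, so a compromised sub-app user cannot silently overwrite rows owned by someone else.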

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch
>
>
> After a discussion with Tez folks, we have been considering introducing a 
> table to store sub-application information.
> For example, a Tez session runs for a certain period as user X and runs a 
> few AMs. These AMs accept DAGs from other users, and Tez executes these DAGs 
> as a doAs user. ATSv2 should store this information in a new table, perhaps 
> called the "sub_application" table. 
> This jira tracks the code changes needed for the table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3260) AM attempt fail to register before RM processes launch event

2017-07-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086574#comment-16086574
 ] 

Jason Lowe commented on YARN-3260:
--

Thanks for the patch!  +1 lgtm.  I'll commit this tomorrow if there are no 
objections.

> AM attempt fail to register before RM processes launch event
> 
>
> Key: YARN-3260
> URL: https://issues.apache.org/jira/browse/YARN-3260
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-3260.001.patch
>
>
> The RM on one of our clusters was running behind on processing 
> AsyncDispatcher events, and this caused AMs to fail to register due to an 
> NPE.  The AM was launched and attempting to register before the 
> RMAppAttemptImpl had processed the LAUNCHED event, and the client to AM token 
> had not been generated yet.  The NPE occurred because the 
> ApplicationMasterService tried to encode the missing token.
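
A hedged sketch of the kind of guard that addresses this race (names follow the RM's public types, but this is illustrative rather than the attached patch):

{code}
// Hypothetical guard: the client-to-AM token master key may not exist yet if
// the attempt has not processed its LAUNCHED event; skip encoding rather than NPE.
SecretKey masterKey = appAttempt.getClientTokenMasterKey();
if (masterKey != null) {
  response.setClientToAMTokenMasterKey(
      ByteBuffer.wrap(masterKey.getEncoded()));
}
{code}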



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6805) NPE in LinuxContainerExecutor due to null PrivilegedOperationException exit code

2017-07-13 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-6805:
-
Fix Version/s: (was: 2.9)
   2.9.0

> NPE in LinuxContainerExecutor due to null PrivilegedOperationException exit 
> code
> 
>
> Key: YARN-6805
> URL: https://issues.apache.org/jira/browse/YARN-6805
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-6805.001.patch
>
>
> The LinuxContainerExecutor contains a number of code snippets like this:
> {code}
> } catch (PrivilegedOperationException e) {
>   int exitCode = e.getExitCode();
> {code}
> PrivilegedOperationException#getExitCode can return null if the operation was 
> interrupted, so when the JVM does auto-unboxing on that last line it can NPE 
> if there was no exit code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5412) Create a proxy chain for ResourceManager REST API in the Router

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086556#comment-16086556
 ] 

Hadoop QA commented on YARN-5412:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
21s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
40s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 217 unchanged - 0 fixed = 221 total (was 217) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 11 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 15 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} 
|
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 59s{color} 
| {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.router.webapp.TestRouterWebServicesREST |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (YARN-6654) RollingLevelDBTimelineStore backwards incompatible after fst upgrade

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086554#comment-16086554
 ] 

Hudson commented on YARN-6654:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12002 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12002/])
YARN-6654. RollingLevelDBTimelineStore backwards incompatible after fst (jlowe: 
rev 5f1ee72b0ebf0330417b7c0115083bc851923be4)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java


> RollingLevelDBTimelineStore backwards incompatible after fst upgrade
> 
>
> Key: YARN-6654
> URL: https://issues.apache.org/jira/browse/YARN-6654
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.2
>
> Attachments: YARN-6654.1.patch, YARN-6654.2.patch, YARN-6654.3.patch
>
>
> There is a minor backwards-incompatible change introduced while upgrading 
> the fst library from 2.24 to 2.50.
> {code}
> Exception in thread "main" java.io.IOException: java.lang.RuntimeException: 
> unable to find class for code 83
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:243)
>   at 
> org.nustaq.serialization.FSTConfiguration.asObject(FSTConfiguration.java:1125)
>   at org.nustaq.serialization.FSTNoJackson.main(FSTNoJackson.java:31)
> Caused by: java.lang.RuntimeException: unable to find class for code 83
>   at 
> org.nustaq.serialization.FSTClazzNameRegistry.decodeClass(FSTClazzNameRegistry.java:180)
>   at 
> org.nustaq.serialization.coders.FSTStreamDecoder.readClass(FSTStreamDecoder.java:472)
>   at 
> org.nustaq.serialization.FSTObjectInput.readClass(FSTObjectInput.java:933)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:343)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.serializers.FSTArrayListSerializer.instantiate(FSTArrayListSerializer.java:63)
>   at 
> org.nustaq.serialization.FSTObjectInput.instantiateAndReadWithSer(FSTObjectInput.java:497)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:366)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.serializers.FSTMapSerializer.instantiate(FSTMapSerializer.java:78)
>   at 
> org.nustaq.serialization.FSTObjectInput.instantiateAndReadWithSer(FSTObjectInput.java:497)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:366)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:307)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:241)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6815) [Bug] FederationStateStoreFacade return behavior should be consistent irrespective of whether caching is enabled or not

2017-07-13 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6815:
-
Attachment: YARN-6815-YARN-2915.v3.patch

Thanks [~curino] for reviewing the patch. I updated the patch to address your 
comments.

> [Bug] FederationStateStoreFacade return behavior should be consistent 
> irrespective of whether caching is enabled or not
> ---
>
> Key: YARN-6815
> URL: https://issues.apache.org/jira/browse/YARN-6815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6815-YARN-2915.v1.patch, 
> YARN-6815-YARN-2915.v2.patch, YARN-6815-YARN-2915.v3.patch
>
>
> {{FederationStateStoreFacade::getSubCluster/getPolicyConfiguration}} returns 
> null if caching is enabled, but throws YarnException if caching is disabled, 
> when the queried entity is absent. This jira proposes to make the return 
> behavior consistent to ensure correctness of clients like 
> {{RouterPolicyFacade}}.
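
A minimal sketch of one way to make the two paths agree (method and field names are assumptions for illustration, not the actual facade API):

{code}
// Hypothetical sketch: both the cached and the direct path surface an absent
// entity the same way, so callers such as RouterPolicyFacade behave identically.
public SubClusterInfo getSubCluster(SubClusterId id) throws YarnException {
  SubClusterInfo info = isCachingEnabled()
      ? cache.get(id)                  // historically returned null when absent
      : stateStore.getSubCluster(id);  // historically threw when absent
  if (info == null) {
    throw new YarnException("SubCluster " + id + " does not exist");
  }
  return info;
}
{code}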



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6805) NPE in LinuxContainerExecutor due to null PrivilegedOperationException exit code

2017-07-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086541#comment-16086541
 ] 

Jason Lowe commented on YARN-6805:
--

Thanks for the reviews!  I'll fix the whitespace nit on the commit.

> NPE in LinuxContainerExecutor due to null PrivilegedOperationException exit 
> code
> 
>
> Key: YARN-6805
> URL: https://issues.apache.org/jira/browse/YARN-6805
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: YARN-6805.001.patch
>
>
> The LinuxContainerExecutor contains a number of code snippets like this:
> {code}
> } catch (PrivilegedOperationException e) {
>   int exitCode = e.getExitCode();
> {code}
> PrivilegedOperationException#getExitCode can return null if the operation was 
> interrupted, so when the JVM does auto-unboxing on that last line it can NPE 
> if there was no exit code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6821) Move FederationStateStore SQL DDL files from test resource to sbin

2017-07-13 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6821:
-
Attachment: YARN-6821-YARN-2915-v2.patch

[~curino], what you say makes sense. Updating the patch to move the DDL SQL 
files to the bin folder.

> Move FederationStateStore SQL DDL files from test resource to sbin
> --
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6821-YARN-2915-v1.patch, 
> YARN-6821-YARN-2915-v2.patch
>
>
> The FederationStateStore SQL DDL files are currently in _src/test/resources_ 
> as there's no compile time dependency. This jira proposes to move them to 
> _bin_ to ensure they are part of the distro.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6821) Move FederationStateStore SQL DDL files from test resource to sbin

2017-07-13 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6821:
-
Description: The FederationStateStore SQL DDL files are currently in 
_src/test/resources_ as there's no compile time dependency. This jira proposes 
to move them to _bin_ to ensure they are part of the distro.  (was: The 
FederationStateStore SQL DDL files are currently in _src/test_ as there's no 
compile time dependency. This jira proposes to move them to _src/main_ to 
ensure they are part of the distro.)

> Move FederationStateStore SQL DDL files from test resource to sbin
> --
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6821-YARN-2915-v1.patch, 
> YARN-6821-YARN-2915-v2.patch
>
>
> The FederationStateStore SQL DDL files are currently in _src/test/resources_ 
> as there's no compile time dependency. This jira proposes to move them to 
> _bin_ to ensure they are part of the distro.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6821) Move FederationStateStore SQL DDL files from test resource to sbin

2017-07-13 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6821:
-
Summary: Move FederationStateStore SQL DDL files from test resource to sbin 
 (was: Move FederationStateStore SQL DDL files from test to main)

> Move FederationStateStore SQL DDL files from test resource to sbin
> --
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6821-YARN-2915-v1.patch
>
>
> The FederationStateStore SQL DDL files are currently in _src/test_ as there's 
> no compile time dependency. This jira proposes to move them to _src/main_ to 
> ensure they are part of the distro.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5731) Preemption calculation is not accurate when reserved containers are present in queue.

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086526#comment-16086526
 ] 

Hadoop QA commented on YARN-5731:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 9 new + 48 unchanged - 1 fixed = 57 total (was 49) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 58s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestRMRestart |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
| JDK v1.7.0_131 Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.reservation.TestFairSchedulerPlanFollower
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | YARN-5731 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877153/YARN-5731.branch-2.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-13 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086522#comment-16086522
 ] 

Nathan Roberts commented on YARN-6768:
--

Probably no need to calculate the full numDigits; once you have 
minimumDigits, you're done. 

> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6768.1.patch, YARN-6768.2.patch, YARN-6768.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6821) Move FederationStateStore SQL DDL files from test to main

2017-07-13 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086489#comment-16086489
 ] 

Carlo Curino commented on YARN-6821:


Thanks, Subru, for the contribution. Can I suggest placing them not within the 
.jar, but somewhere like bin/ or sbin/, so that users don't need to decompress 
the jar to use them? Other than that, the patch (obviously) looks good.

> Move FederationStateStore SQL DDL files from test to main
> -
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6821-YARN-2915-v1.patch
>
>
> The FederationStateStore SQL DDL files are currently in _src/test_ as there's 
> no compile time dependency. This jira proposes to move them to _src/main_ to 
> ensure they are part of the distro.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5947) Create LeveldbConfigurationStore class using Leveldb as backing store

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086444#comment-16086444
 ] 

Hadoop QA commented on YARN-5947:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5734 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  5m 
49s{color} | {color:red} root in YARN-5734 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m  
5s{color} | {color:red} hadoop-yarn in YARN-5734 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} YARN-5734 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m  
8s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m  8s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 30 new + 288 unchanged - 0 fixed = 318 total (was 288) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 4 new + 859 unchanged - 0 fixed = 863 total (was 859) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.LeveldbConfigurationStore.initialize(Configuration,
 Configuration)  At 
LeveldbConfigurationStore.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.LeveldbConfigurationStore.initialize(Configuration,
 Configuration)  At LeveldbConfigurationStore.java:[line 77] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.LeveldbConfigurationStore.initialize(Configuration,
 Configuration):in 

[jira] [Commented] (YARN-3895) Support ACLs in ATSv2

2017-07-13 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086424#comment-16086424
 ] 

Vrushali C commented on YARN-3895:
--

Filed YARN-6820 for adding a basic read-side restriction. 

> Support ACLs in ATSv2
> -
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>
> This JIRA is to keep track of authorization support design discussions for 
> both readers and collectors. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6820) Restrict read access to timelineservice v2 data

2017-07-13 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086420#comment-16086420
 ] 

Vrushali C commented on YARN-6820:
--

YARN-3895 will add support for ACLs. I am expecting this jira, YARN-6820, to 
be a simpler fix until the ACLs can be done. 

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>  Labels: yarn-5355-merge-blocker
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.
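As a purely illustrative sketch of the simple whitelist option mentioned 
above; the property names are hypothetical (YARN-6820 had not settled on any 
at this point):

{code}
<!-- Hypothetical properties, for illustration of the whitelist idea only. -->
<property>
  <name>yarn.timeline-service.read.enable-restriction</name>
  <value>true</value>
</property>
<property>
  <name>yarn.timeline-service.read.allowed-users</name>
  <value>admin,hbase,analyst1</value>
</property>
{code}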



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6815) [Bug] FederationStateStoreFacade return behavior should be consistent irrespective of whether caching is enabled or not

2017-07-13 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086397#comment-16086397
 ] 

Carlo Curino commented on YARN-6815:


Hi Subru, thanks for the contribution!

A few minor issues about logging/exception throwing:
# {{RouterPolicyFacade (165)}} you should have the try/catch also for the 
default key, in case there are SQL errors (to avoid availability issues); see 
the sketch after this list
# {{RouterPolicyFacade}} we should use {{LOG.warn}}, as the fact that there 
are exceptions in the FederationStateStore is bad and we should log them
# {{SQLFederationStateStore (405)}} we should throw an exception for a 
non-null {{subclusterInfo}}, and simply LOG.warn for a null subclusterInfo. 
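A minimal sketch of points 1 and 2, under the assumption of hypothetical class 
and member names (this is not the code in the patch):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: the per-queue and the default-key lookups go through
// the same guarded path, so a state-store/SQL error degrades to a logged
// warning instead of an availability loss.
public class GuardedPolicyLookup {
  private static final Logger LOG =
      LoggerFactory.getLogger(GuardedPolicyLookup.class);
  private static final String DEFAULT_KEY = ""; // hypothetical default key

  interface StoreFacade { // stand-in for the state-store facade
    byte[] getPolicyConfiguration(String key) throws Exception;
  }

  byte[] lookup(String queue, StoreFacade facade) {
    byte[] conf = fetch(queue, facade);
    if (conf == null) {
      conf = fetch(DEFAULT_KEY, facade); // point 1: also guarded
    }
    return conf;
  }

  private byte[] fetch(String key, StoreFacade facade) {
    try {
      return facade.getPolicyConfiguration(key);
    } catch (Exception e) {
      // Point 2: warn rather than swallow -- exceptions from the
      // FederationStateStore are abnormal and should be visible.
      LOG.warn("Policy lookup failed for key '{}'", key, e);
      return null;
    }
  }
}
{code}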

Other than this I am +1 on this patch.

> [Bug] FederationStateStoreFacade return behavior should be consistent 
> irrespective of whether caching is enabled or not
> ---
>
> Key: YARN-6815
> URL: https://issues.apache.org/jira/browse/YARN-6815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6815-YARN-2915.v1.patch, 
> YARN-6815-YARN-2915.v2.patch
>
>
> {{FederationStateStoreFacade::getSubCluster/getPolicyConfiguration}} returns 
> null if caching is enabled or throws YarnException if caching is disabled if 
> the queried entity is absent. This jira proposes to make the return 
> consistent to ensure correctness of clients like {{RouterPolicyFacade}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5412) Create a proxy chain for ResourceManager REST API in the Router

2017-07-13 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086387#comment-16086387
 ] 

Giovanni Matteo Fumarola commented on YARN-5412:


[~curino] thanks for your feedback. 
v3 addresses all of these.

> Create a proxy chain for ResourceManager REST API in the Router
> ---
>
> Key: YARN-5412
> URL: https://issues.apache.org/jira/browse/YARN-5412
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5412-YARN-2915.1.patch, 
> YARN-5412-YARN-2915.2.patch, YARN-5412-YARN-2915.3.patch, 
> YARN-5412-YARN-2915.proto.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for the ResourceManager REST API in 
> the Router. This provides a placeholder for:
> 1) throttling mis-behaving clients (YARN-1546)
> 2) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern like we did in YARN-2884 
> to generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5412) Create a proxy chain for ResourceManager REST API in the Router

2017-07-13 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5412:
---
Attachment: YARN-5412-YARN-2915.3.patch

> Create a proxy chain for ResourceManager REST API in the Router
> ---
>
> Key: YARN-5412
> URL: https://issues.apache.org/jira/browse/YARN-5412
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5412-YARN-2915.1.patch, 
> YARN-5412-YARN-2915.2.patch, YARN-5412-YARN-2915.3.patch, 
> YARN-5412-YARN-2915.proto.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for the ResourceManager REST API in 
> the Router. This provides a placeholder for:
> 1) throttling mis-behaving clients (YARN-1546)
> 2) masking the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern like we did in YARN-2884 
> to generalize the approach and have only dynamic coupling for Federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6819) Application report fails if app rejected due to nodesize

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086342#comment-16086342
 ] 

Hadoop QA commented on YARN-6819:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 208 unchanged - 0 fixed = 209 total (was 208) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877136/YARN-6819.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux af1ca81aaa55 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 945c095 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16425/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16425/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16425/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 

[jira] [Updated] (YARN-5731) Preemption calculation is not accurate when reserved containers are present in queue.

2017-07-13 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5731:
-
Attachment: YARN-5731.branch-2.002.patch

Attached branch-2 patch.

> Preemption calculation is not accurate when reserved containers are present 
> in queue.
> -
>
> Key: YARN-5731
> URL: https://issues.apache.org/jira/browse/YARN-5731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Wangda Tan
> Attachments: YARN-5731.001.patch, YARN-5731.002.patch, 
> YARN-5731.branch-2.002.patch, YARN-5731-branch-2.8.001.patch
>
>
> YARN Capacity Scheduler does not kick in preemption under the scenario below.
> Two queues A and B, each with 50% capacity, 100% maximum capacity, and user 
> limit factor 2. The minimum container size is 1536MB and the total cluster 
> resource is 40GB. Now submit the first job, which needs 1536MB for the AM 
> and 9 task containers of 4.5GB each, to queue A. The job will get 8 
> containers in total (AM 1536MB + 7 * 4.5GB = 33GB), the cluster usage is 
> 93.8%, and the job has reserved a container of 4.5GB.
> Now when the next job (1536MB for the AM and 9 task containers of 4.5GB 
> each) is submitted to queue B, the job hangs in ACCEPTED state forever and 
> the RM scheduler never kicks in preemption. (RM UI Image 2 attached)
> Test Case:
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue A --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> After a minute..
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue B --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> Credit to: [~Prabhu Joseph] for bug investigation and troubleshooting.
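As a sanity check of the numbers above: 1536MB (AM) + 7 * 4.5GB = 33GB 
allocated; adding the 4.5GB reserved container gives 37.5GB of the 40GB 
cluster, i.e. 93.75%, which matches the reported 93.8% usage. Only 2.5GB 
remain free, so no further 4.5GB container can be placed.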



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6654) RollingLevelDBTimelineStore backwards incompatible after fst upgrade

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086298#comment-16086298
 ] 

Hadoop QA commented on YARN-6654:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice:
 The patch generated 0 new + 3 unchanged - 2 fixed = 3 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
16s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6654 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12871258/YARN-6654.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 918f8e6134a2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 945c095 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16426/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16426/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RollingLevelDBTimelineStore backwards incompatible after fst upgrade
> 
>
> Key: YARN-6654
> URL: https://issues.apache.org/jira/browse/YARN-6654
> 

[jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-13 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086287#comment-16086287
 ] 

Eric Payne commented on YARN-5892:
--

The javadoc failures are because of the naming convention of the '_' display 
method:
{code}
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java:580:
 warning: '_' used as an identifier
{code}

The {{TestRMRestart}} failure is the same as HADOOP-14637.
{{resourcemanager.security.TestDelegationTokenRenewer}} and 
{{TestCapacityScheduler}} both pass for me in my local repo build.

I will cherry-pick this commit from trunk into branch-2.

> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-alpha3
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch, 
> YARN-5892.branch-2.016.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6792) Incorrect XML conversion in NodeIDsInfo and LabelsToNodesInfo

2017-07-13 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086290#comment-16086290
 ] 

Giovanni Matteo Fumarola commented on YARN-6792:


[~sunilg] thanks for the feedback - v2 addresses it.
Yetus reported 2 failed tests not related to my patch.

> Incorrect XML conversion in NodeIDsInfo and LabelsToNodesInfo
> -
>
> Key: YARN-6792
> URL: https://issues.apache.org/jira/browse/YARN-6792
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Blocker
> Attachments: YARN-6792.v1.patch, YARN-6792.v2.patch
>
>
> NodeIDsInfo contains a typo and there is a missing constructor in 
> LabelsToNodesInfo. These bugs do not allow a correct conversion to XML of 
> LabelsToNodesInfo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6821) Move FederationStateStore SQL DDL files from test to main

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086288#comment-16086288
 ] 

Hadoop QA commented on YARN-6821:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-2915 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} YARN-2915 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877137/YARN-6821-YARN-2915-v1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  |
| uname | Linux 803cf94aa884 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 590d959 |
| Default Java | 1.8.0_131 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16424/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16424/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Move FederationStateStore SQL DDL files from test to main
> -
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6821-YARN-2915-v1.patch
>
>
> The FederationStateStore SQL DDL files are currently in _src/test_ as there's 
> no compile time dependency. This jira proposes to move them to _src/main_ to 
> ensure they are part of the distro.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-13 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086285#comment-16086285
 ] 

Jian He commented on YARN-6804:
---

- Probably we can create a common util method in RegistryPathUtils; it can be 
used both by the AM to post the component info and by the NodeManager to 
supply the --host info:
{code}
containerId.replaceFirst("container_", "ctr-")
    .replace("_", "-");
{code}
and then BaseServiceRecordProcessor#getContainerIDName no longer needs to 
replace container with ctr on read. A sketch of such a helper follows.
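A minimal sketch of the suggested shared helper, assuming a hypothetical 
method name (the comment fixes neither the name nor the signature):

{code}
// Hypothetical helper in RegistryPathUtils; converts a container ID
// into its DNS-friendly form, e.g.
//   container_1499777693984_0001_01_000002
//   -> ctr-1499777693984-0001-01-000002
public static String encodeContainerIdForDNS(String containerId) {
  return containerId.replaceFirst("container_", "ctr-")
      .replace("_", "-");
}
{code}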

> Allow custom hostname for docker containers in native services
> --
>
> Key: YARN-6804
> URL: https://issues.apache.org/jira/browse/YARN-6804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6804-yarn-native-services.001.patch, 
> YARN-6804-yarn-native-services.002.patch
>
>
> Instead of the default random docker container hostname, we could set a more 
> user-friendly hostname for the container. The default could be a hostname 
> based on the container ID, with an option for the AM to provide a different 
> hostname. In the case of the native services AM, we could provide the 
> hostname that would be created by the registry DNS server. Regardless of 
> whether or not registry DNS is enabled, this would be a more useful hostname 
> for the docker container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6804) Allow custom hostname for docker containers in native services

2017-07-13 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086276#comment-16086276
 ] 

Jian He commented on YARN-6804:
---

lgtm, two minor comments:
- I think the check for “LOG.isInfoEnabled” is not required:
{code}
if (LOG.isInfoEnabled()) {
  LOG.info("setting hostname in container to: " + name);
}
{code}
- the network parameter is not used, and the added NETWORK_TYPE_BRIDGE also 
seems unused:
{code}
private void setHostname(DockerRunCommand runCommand,
    String containerIdStr, String network, String name)
{code}

> Allow custom hostname for docker containers in native services
> --
>
> Key: YARN-6804
> URL: https://issues.apache.org/jira/browse/YARN-6804
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-6804-yarn-native-services.001.patch, 
> YARN-6804-yarn-native-services.002.patch
>
>
> Instead of the default random docker container hostname, we could set a more 
> user-friendly hostname for the container. The default could be a hostname 
> based on the container ID, with an option for the AM to provide a different 
> hostname. In the case of the native services AM, we could provide the 
> hostname that would be created by the registry DNS server. Regardless of 
> whether or not registry DNS is enabled, this would be a more useful hostname 
> for the docker container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6654) RollingLevelDBTimelineStore backwards incompatible after fst upgrade

2017-07-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086266#comment-16086266
 ] 

Jason Lowe commented on YARN-6654:
--

Thanks for updating the patch [~jeagles]!

+1 lgtm.  Will commit this pending another Jenkins run since it's been a while.

> RollingLevelDBTimelineStore backwards incompatible after fst upgrade
> 
>
> Key: YARN-6654
> URL: https://issues.apache.org/jira/browse/YARN-6654
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: YARN-6654.1.patch, YARN-6654.2.patch, YARN-6654.3.patch
>
>
> There is a small minor backwards compatible change introduced while upgrading 
> fst library from 2.24 to 2.50.
> {code}
> Exception in thread "main" java.io.IOException: java.lang.RuntimeException: 
> unable to find class for code 83
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:243)
>   at 
> org.nustaq.serialization.FSTConfiguration.asObject(FSTConfiguration.java:1125)
>   at org.nustaq.serialization.FSTNoJackson.main(FSTNoJackson.java:31)
> Caused by: java.lang.RuntimeException: unable to find class for code 83
>   at 
> org.nustaq.serialization.FSTClazzNameRegistry.decodeClass(FSTClazzNameRegistry.java:180)
>   at 
> org.nustaq.serialization.coders.FSTStreamDecoder.readClass(FSTStreamDecoder.java:472)
>   at 
> org.nustaq.serialization.FSTObjectInput.readClass(FSTObjectInput.java:933)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:343)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.serializers.FSTArrayListSerializer.instantiate(FSTArrayListSerializer.java:63)
>   at 
> org.nustaq.serialization.FSTObjectInput.instantiateAndReadWithSer(FSTObjectInput.java:497)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:366)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.serializers.FSTMapSerializer.instantiate(FSTMapSerializer.java:78)
>   at 
> org.nustaq.serialization.FSTObjectInput.instantiateAndReadWithSer(FSTObjectInput.java:497)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectWithHeader(FSTObjectInput.java:366)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObjectInternal(FSTObjectInput.java:327)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:307)
>   at 
> org.nustaq.serialization.FSTObjectInput.readObject(FSTObjectInput.java:241)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6819) Application report fails if app rejected due to nodesize

2017-07-13 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086262#comment-16086262
 ] 

Bibin A Chundatt commented on YARN-6819:


Thank you [~rohithsharma].
Handled all the mentioned scenarios and added an FT for the same.


> Application report fails if app rejected due to nodesize
> 
>
> Key: YARN-6819
> URL: https://issues.apache.org/jira/browse/YARN-6819
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6819.001.patch, YARN-6819.002.patch
>
>
> In YARN-5006 the application is rejected when the nodesize limit is 
> exceeded. {{FinalSavingTransition}} does not set stateBeforeFinalSaving 
> after skipping the save to the store, which causes the application report 
> to fail.
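A self-contained illustration of the failure mode, with hypothetical 
surrounding code (only the field name stateBeforeFinalSaving comes from the 
JIRA text; the real transition logic differs):

{code}
enum RMAppState { ACCEPTED, FINAL_SAVING, FAILED }

// Hypothetical sketch: the report path reads stateBeforeFinalSaving, so
// the transition must set it even when the save to the store is skipped.
class AppSketch {
  RMAppState state = RMAppState.ACCEPTED;
  RMAppState stateBeforeFinalSaving;

  void finalSavingTransition(boolean skipStoreSave) {
    stateBeforeFinalSaving = state; // the assignment the bug skipped
    state = RMAppState.FINAL_SAVING;
    if (!skipStoreSave) {
      // persist the app state to the RM state store (elided)
    }
  }
}
{code}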



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6807) Adding required missing configs to Federation configuration guide based on e2e testing

2017-07-13 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086252#comment-16086252
 ] 

Subru Krishnan commented on YARN-6807:
--

As a side note, please don't remove any patches when you fix Yetus warnings 
or address comments. Instead keep bumping up the versions, as we use JIRA for 
the provenance record.

> Adding required missing configs to Federation configuration guide based on 
> e2e testing
> --
>
> Key: YARN-6807
> URL: https://issues.apache.org/jira/browse/YARN-6807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, federation
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Tanuj Nayak
> Attachments: YARN-6807-YARN-2915-v1.2.patch, 
> YARN-6807-YARN-2915-v1.patch
>
>
> We identified some missing configs that are required for e2e run. This JIRA 
> proposes to update the documentation to include the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6821) Move FederationStateStore SQL DDL files from test to main

2017-07-13 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-6821:
-
Attachment: YARN-6821-YARN-2915-v1.patch

Attaching a trivial patch that does the move.

> Move FederationStateStore SQL DDL files from test to main
> -
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-6821-YARN-2915-v1.patch
>
>
> The FederationStateStore SQL DDL files are currently in _src/test_ as there's 
> no compile time dependency. This jira proposes to move them to _src/main_ to 
> ensure they are part of the distro.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6807) Adding required missing configs to Federation configuration guide based on e2e testing

2017-07-13 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086246#comment-16086246
 ] 

Subru Krishnan commented on YARN-6807:
--

Thanks [~tanujnay] for addressing my feedback. 

The latest patch is close; I just have a few minor comments:
* Looks like you missed one of my comments - {quote} Please clarify why we 
need the additional configs in the client yarn-site.xml. {quote}
* Use the description below for *FederationRMFailoverProxyProvider* (a config 
sketch follows this list): {quote} The class used to connect to the RMs by 
looking up the membership information in the federation state-store. This 
must be set if federation is enabled, even if RM HA is not enabled. {quote}
* The command to start the router appears twice; you can remove the first 
occurrence as it's redundant.
* In the sample job, a note calling out to use a large enough number of 
mappers so that the job needs to be federated, i.e. larger than a single 
cluster (which happens to be 16 in the example), would be useful.
* In the output, please include more context (map/reduce progress) and, more 
importantly, clearly call out that no code change or even recompile is needed 
to run the existing sample, and that the output is the same.
* I am working on YARN-6821, which reminded me that:
** we have to explicitly call out running the SQL scripts to create the 
tables and stored procedures. Can you do that as part of a new 
_FederationStateStore_ sub-section, either as part of the configuration or 
before starting the clusters.
** in the main _FederationStateStore_ section, before we describe the GPG, we 
have to call out that this part is future work and link to YARN-5597.
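For illustration, a sketch of the client-side property the description above 
belongs to; the key is YARN's standard failover-provider knob, and wiring it 
to the federation provider class is this editor's reading of the discussion, 
not text from the patch:

{code}
<property>
  <name>yarn.client.failover-proxy-provider</name>
  <value>org.apache.hadoop.yarn.server.federation.failover.FederationRMFailoverProxyProvider</value>
</property>
{code}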

> Adding required missing configs to Federation configuration guide based on 
> e2e testing
> --
>
> Key: YARN-6807
> URL: https://issues.apache.org/jira/browse/YARN-6807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation, federation
>Affects Versions: YARN-2915
>Reporter: Subru Krishnan
>Assignee: Tanuj Nayak
> Attachments: YARN-6807-YARN-2915-v1.2.patch, 
> YARN-6807-YARN-2915-v1.patch
>
>
> We identified some missing configs that are required for e2e run. This JIRA 
> proposes to update the documentation to include the same.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6819) Application report fails if app rejected due to nodesize

2017-07-13 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6819:
---
Attachment: YARN-6819.002.patch

> Application report fails if app rejected due to nodesize
> 
>
> Key: YARN-6819
> URL: https://issues.apache.org/jira/browse/YARN-6819
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6819.001.patch, YARN-6819.002.patch
>
>
> In YARN-5006 the application is rejected when the nodesize limit is 
> exceeded. {{FinalSavingTransition}} does not set stateBeforeFinalSaving 
> after skipping the save to the store, which causes the application report 
> to fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086240#comment-16086240
 ] 

Hadoop QA commented on YARN-5892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  6m  
2s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 16 new + 663 unchanged - 1 fixed = 679 total (was 664) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_131
 with JDK v1.8.0_131 generated 4 new + 877 unchanged - 0 fixed = 881 total (was 
877) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 18s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| 

[jira] [Assigned] (YARN-6821) Move FederationStateStore SQL DDL files from test to main

2017-07-13 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan reassigned YARN-6821:


Assignee: Subru Krishnan

> Move FederationStateStore SQL DDL files from test to main
> -
>
> Key: YARN-6821
> URL: https://issues.apache.org/jira/browse/YARN-6821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> The FederationStateStore SQL DDL files are currently in _src/test_ as there's 
> no compile time dependency. This jira proposes to move them to _src/main_ to 
> ensure they are part of the distro.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6821) Move FederationStateStore SQL DDL files from test to main

2017-07-13 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-6821:


 Summary: Move FederationStateStore SQL DDL files from test to main
 Key: YARN-6821
 URL: https://issues.apache.org/jira/browse/YARN-6821
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Subru Krishnan


The FederationStateStore SQL DDL files are currently in _src/test_ as there's 
no compile time dependency. This jira proposes to move them to _src/main_ to 
ensure they are part of the distro.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6808) Allow Schedulers to return OPPORTUNISTIC containers when queues go over configured capacity

2017-07-13 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086186#comment-16086186
 ] 

Wangda Tan commented on YARN-6808:
--

[~asuresh], thanks for the detailed explanations. 

I can understand there are two separate targets, but I'm not sure how the two 
targets relate to each other:
1) Use opportunistic containers to do lazy preemption in the NM. (Is there 
any umbrella JIRA for this?)
2) Convert guaranteed requests to opportunistic requests when the app's 
headroom is reached.

Questions: 
1) Let's say app1 is in an underutilized queue and wants to preempt 
containers from an over-utilized queue. Will preemption happen if app1 asks 
for opportunistic containers?
2) For target #1, who makes the decision of moving guaranteed containers to 
opportunistic containers? If it is still decided by the central RM, does that 
mean the preemption logic in the RM stays the same as today, except that the 
kill operation is decided on the NM side? 
3) For overall opportunistic container execution: if an OC launch request is 
queued by the NM, it may wait a long time before being executed. In this 
case, do we need to modify the AM code to: a. expect a longer delay before 
considering the launch failed; b. ask for more resources on different hosts, 
since there's no guaranteed launch time for an OC? 

Comments for target #2: 
- What happens if an app doesn't want to ask for opportunistic containers 
when it goes beyond its headroom (such as online services)? I think this 
should be a per-app config (give me OCs when I go beyond my headroom).
- The existing patch makes a static decision, which happens when a new 
resource request is added by the AM. Should this be reconsidered when the 
app's headroom changes over time?

Overall, I think this is a big feature that involves lots of components. 
Including a more detailed design doc would help contributors understand its 
scope and workflow.

> Allow Schedulers to return OPPORTUNISTIC containers when queues go over 
> configured capacity
> ---
>
> Key: YARN-6808
> URL: https://issues.apache.org/jira/browse/YARN-6808
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6808.001.patch
>
>
> This is based on discussions with [~kasha] and [~kkaranasos].
> Currently, when a Queues goes over capacity, apps on starved queues must wait 
> either for containers to complete or for them to be pre-empted by the 
> scheduler to get resources.
> This JIRA proposes to allow Schedulers to:
> # Allocate all containers over the configured queue capacity/weight as 
> OPPORTUNISTIC.
> # Auto-promote running OPPORTUNISTIC containers of apps as and when their 
> GUARANTEED containers complete.
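As a purely illustrative sketch of the allocation rule proposed above (queue 
accounting is simplified to MB totals; only the ExecutionType values are real 
YARN API, the rest is hypothetical):

{code}
import org.apache.hadoop.yarn.api.records.ExecutionType;

// Hypothetical decision point, not the patch: a request that would push
// the queue over its configured capacity is granted as OPPORTUNISTIC
// instead of waiting for completions or preemption.
final class OverCapacitySketch {
  static ExecutionType typeFor(long queueUsedMB, long queueCapacityMB,
      long requestMB) {
    return (queueUsedMB + requestMB <= queueCapacityMB)
        ? ExecutionType.GUARANTEED
        : ExecutionType.OPPORTUNISTIC;
  }
}
{code}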



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6820) Restrict read access to timelineservice v2 data

2017-07-13 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-6820:
-
Labels: yarn-5355-merge-blocker  (was: )

> Restrict read access to timelineservice v2 data 
> 
>
> Key: YARN-6820
> URL: https://issues.apache.org/jira/browse/YARN-6820
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>  Labels: yarn-5355-merge-blocker
>
> Need to provide a way to restrict read access in ATSv2. Not all users should 
> be able to read all entities. On the flip side, some folks may not need any 
> read restrictions, so we need to provide a way to disable this access 
> restriction as well. 
> Initially this access restriction could be done in a simple way via a 
> whitelist of users allowed to read data. That set of users can read all data, 
> no other user can read any data. Can be turned off for all users to read all 
> data.
> Could be stored in a "domain" table in hbase perhaps. Or a configuration 
> setting for the cluster. Or something else that's simple enough. ATSv1 has a 
> concept of domain for isolating users for reading. Would be good to keep that 
> in consideration. 
> In ATSv1, domain offers a namespace for Timeline server allowing users to 
> host multiple entities, isolating them from other users and applications. A 
> “Domain” in ATSv1 primarily stores owner info, read and write ACL 
> information, created and modified time stamp information. Each Domain is 
> identified by an ID which must be unique across all users in the YARN cluster.
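
A minimal sketch of the whitelist option described above; the class name and 
configuration semantics are assumptions, not a committed design:
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of a simple whitelist check for ATSv2 reads; all names are assumed.
class TimelineReaderWhitelist {
  private final boolean enabled;
  private final Set<String> allowedReaders;

  TimelineReaderWhitelist(boolean enabled, String commaSeparatedUsers) {
    this.enabled = enabled;
    this.allowedReaders =
        new HashSet<>(Arrays.asList(commaSeparatedUsers.split("\\s*,\\s*")));
  }

  // When disabled, every user may read all data; when enabled, only
  // whitelisted users may read anything.
  boolean canRead(String user) {
    return !enabled || allowedReaders.contains(user);
  }
}
{code}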



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086187#comment-16086187
 ] 

Hadoop QA commented on YARN-6775:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  9m 
21s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 12 new + 620 unchanged - 0 fixed = 632 total (was 620) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 43m 38s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestRMRestart |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | YARN-6775 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877112/YARN-6775.branch-2.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 42e3750299f0 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 

[jira] [Commented] (YARN-6689) PlacementRule should be configurable

2017-07-13 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086177#comment-16086177
 ] 

Jonathan Hung commented on YARN-6689:
-

Thanks Xuan/Wangda for the reviews and commit!

> PlacementRule should be configurable
> 
>
> Key: YARN-6689
> URL: https://issues.apache.org/jira/browse/YARN-6689
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6689.001.patch, YARN-6689.002.patch, 
> YARN-6689.003.patch, YARN-6689.004.patch
>
>
> YARN-3635 introduces PlacementRules for placing applications in queues. It is 
> currently hardcoded to one rule, {{UserGroupMappingPlacementRule}}. This 
> should be configurable as mentioned in the comments:{noformat}  private void 
> updatePlacementRules() throws IOException {
> List<PlacementRule> placementRules = new ArrayList<>();
> // Initialize UserGroupMappingPlacementRule
> // TODO, need make this defineable by configuration.
> UserGroupMappingPlacementRule ugRule = getUserGroupMappingPlacementRule();
> if (null != ugRule) {
>   placementRules.add(ugRule);
> }
> rmContext.getQueuePlacementManager().updateRules(placementRules);
>   }{noformat}
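
For illustration, a hedged sketch of what a configurable version might look 
like; the config key name and the reflection-based loading are assumptions, 
not the committed design:
{code}
// Sketch only: load placement rules from configuration instead of hardcoding
// UserGroupMappingPlacementRule. The key name below is an assumption.
private void updatePlacementRules() throws IOException {
  List<PlacementRule> placementRules = new ArrayList<>();
  Class<?>[] ruleClasses =
      conf.getClasses("yarn.scheduler.queue-placement-rules"); // assumed key
  if (ruleClasses != null) {
    for (Class<?> ruleClass : ruleClasses) {
      // Instantiate each configured rule; rules apply in configured order.
      placementRules.add(
          (PlacementRule) ReflectionUtils.newInstance(ruleClass, conf));
    }
  }
  rmContext.getQueuePlacementManager().updateRules(placementRules);
}
{code}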



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6820) Restrict read access to timelineservice v2 data

2017-07-13 Thread Vrushali C (JIRA)
Vrushali C created YARN-6820:


 Summary: Restrict read access to timelineservice v2 data 
 Key: YARN-6820
 URL: https://issues.apache.org/jira/browse/YARN-6820
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C


Need to provide a way to restrict read access in ATSv2. Not all users should be 
able to read all entities. On the flip side, some folks may not need any read 
restrictions, so we need to provide a way to disable this access restriction as 
well. 

Initially this access restriction could be done in a simple way via a whitelist 
of users allowed to read data. That set of users can read all data, no other 
user can read any data. Can be turned off for all users to read all data.

Could be stored in a "domain" table in hbase perhaps. Or a configuration 
setting for the cluster. Or something else that's simple enough. ATSv1 has a 
concept of domain for isolating users for reading. Would be good to keep that 
in consideration. 

In ATSv1, domain offers a namespace for Timeline server allowing users to host 
multiple entities, isolating them from other users and applications. A “Domain” 
in ATSv1 primarily stores owner info, read and write ACL information, created 
and modified time stamp information. Each Domain is identified by an ID which 
must be unique across all users in the YARN cluster.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086117#comment-16086117
 ] 

Hudson commented on YARN-6775:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12001 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12001/])
YARN-6775. CapacityScheduler: Improvements to assignContainers, avoid (wangda: 
rev 945c0958bb8df3dd9d5f1467f1216d2e6b0ee3d8)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/ActivitiesLogger.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java


> CapacityScheduler: Improvements to assignContainers, avoid unnecessary 
> canAssignToUser/Queue calls
> --
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6775.001.patch, YARN-6775.002.patch, 
> YARN-6775.branch-2.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.
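
A hedged sketch of the local-caching idea follows; the class and method names 
are invented for illustration and need not match the patch:
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;
import java.util.function.Predicate;

// Illustrative sketch only: cache check results that cannot change within a
// single assignContainers() pass, so each check runs at most once per pass.
class AssignmentCheckCache {
  private final Map<String, Boolean> userCheck = new HashMap<>();
  private Boolean queueCheck;

  boolean canAssignToUser(String user, Predicate<String> realCheck) {
    // Compute canAssignToUser once per user, then reuse the answer.
    return userCheck.computeIfAbsent(user, realCheck::test);
  }

  boolean canAssignToQueue(BooleanSupplier realCheck) {
    // Compute canAssignToQueue once per pass, then reuse the answer.
    if (queueCheck == null) {
      queueCheck = realCheck.getAsBoolean();
    }
    return queueCheck;
  }
}
{code}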



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-07-13 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086090#comment-16086090
 ] 

Eric Payne commented on YARN-2113:
--

Committed the cherry-pick to branch-2.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.3
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.0021.patch, YARN-2113.branch-2.8.0019.patch, 
> YARN-2113.branch-2.8.0020.patch, YARN-2113 Intra-QueuePreemption 
> Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-07-13 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086086#comment-16086086
 ] 

Wangda Tan commented on YARN-6788:
--

Thanks [~sunilg] for updating the patch, some comments: 

AbstractResource: 
- It's better to rename it to BaseResource since it is not abstract.
- {{long memory, long vcores}} are not read in the constructor.
- I think the longer-term goal is to extend AbstractResource to more than two 
resource types, correct? If you agree, could you add a TODO comment in the 
code?  

Also, the existing patch still has lots of map-lookup operations, and I'm not 
sure how they may affect performance. 

Copy-pasting my comment here: 
{code}
Resources/DominantResourceCalculator (maybe there are more places that could 
be changed):
They currently use either setResourceValue(name, value) or 
getResourceInformation(rName), both of which do frequent map lookups. Instead 
of doing this, can we add public (marked as @Private) APIs to the Resource 
object which support get/setResourceInformation/Value by index? Internally we 
can use them to do computations.
To me, name-related fields should not be used while doing computations; 
string-based names should only be used for human readability, such as in the 
UI or messages.
{code}

Considering the size of the patch, I think we can get some performance results 
before changing the code.
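
To make the index-based idea concrete, here is a minimal, self-contained 
sketch; the class and method names are illustrative assumptions, not the 
actual YARN-3926 API:
{code}
// Sketch only: illustrates index-based access that avoids map lookups on the
// hot path. Names (IndexedResource, getValue, setValue) are assumptions.
public class IndexedResource {
  private final long[] values;   // index-addressable resource values
  private final String[] names;  // names kept only for display (UI/messages)

  public IndexedResource(String[] names) {
    this.names = names;
    this.values = new long[names.length];
  }

  // Hot-path accessors: no string hashing or map lookup involved.
  public long getValue(int index) { return values[index]; }
  public void setValue(int index, long value) { values[index] = value; }

  // Name-based lookup retained for human-readable call sites only.
  public long getValue(String name) {
    for (int i = 0; i < names.length; i++) {
      if (names[i].equals(name)) {
        return values[i];
      }
    }
    throw new IllegalArgumentException("Unknown resource: " + name);
  }
}
{code}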

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch
>
>
> Currently we could see a 15% performance delta with this branch. 
> A few performance improvements are needed to address this.
> Also this patch will handle 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6768) Improve performance of yarn api record toString and fromString

2017-07-13 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086084#comment-16086084
 ] 

Jason Lowe commented on YARN-6768:
--

Had an offline discussion with [~jeagles] and [~daryn], and one item that came 
up was whether we could eliminate both the state (and thus the thread-safety 
issues) _and_ the array copy occurring in StringBuilder by using an algorithm 
Daryn described, like the following (note I have not tested this):
{code}
  public static StringBuilder format(StringBuilder sb, long value, int 
minimumDigits) {
if (value < 0) {
  sb.append('-');
  value = -value;
}

int numDigits = 0;
long tmp = value; // 'value' is a long, so the accumulator must be long as well
do {
  ++numDigits;
  tmp /= 10;
} while (tmp > 0);

for (int i = minimumDigits - numDigits; i > 0; --i) {
  sb.append('0');
}

sb.append(value);
return sb;
  }
{code}

This has a little bit more computation to compute the number of digits, but it 
avoids both thread-local lookups and temp buffer allocation.
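
For illustration, a usage sketch of the helper above (the application-ID 
prefix and the surrounding class are assumed):
{code}
// Usage sketch: zero-pads the counter portion of an ID-like string.
StringBuilder sb = new StringBuilder("application_1499988100000_");
format(sb, 42, 4);       // pads 42 to the minimum 4 digits
System.out.println(sb);  // prints: application_1499988100000_0042
{code}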

> Improve performance of yarn api record toString and fromString
> --
>
> Key: YARN-6768
> URL: https://issues.apache.org/jira/browse/YARN-6768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-6768.1.patch, YARN-6768.2.patch, YARN-6768.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6792) Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086082#comment-16086082
 ] 

Hadoop QA commented on YARN-6792:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 5 unchanged - 2 fixed = 5 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m  2s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6792 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877113/YARN-6792.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7779ccffa92c 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b61ab85 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16421/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16421/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16421/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   

[jira] [Updated] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-13 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6775:
-
Fix Version/s: 3.0.0-beta1

> CapacityScheduler: Improvements to assignContainers, avoid unnecessary 
> canAssignToUser/Queue calls
> --
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6775.001.patch, YARN-6775.002.patch, 
> YARN-6775.branch-2.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-13 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086060#comment-16086060
 ] 

Wangda Tan commented on YARN-6775:
--

Committed to trunk, thanks [~nroberts]!

> CapacityScheduler: Improvements to assignContainers, avoid unnecessary 
> canAssignToUser/Queue calls
> --
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6775.001.patch, YARN-6775.002.patch, 
> YARN-6775.branch-2.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-13 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6775:
-
Summary: CapacityScheduler: Improvements to assignContainers, avoid 
unnecessary canAssignToUser/Queue calls  (was: CapacityScheduler: Improvements 
to assignContainers())

> CapacityScheduler: Improvements to assignContainers, avoid unnecessary 
> canAssignToUser/Queue calls
> --
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: YARN-6775.001.patch, YARN-6775.002.patch, 
> YARN-6775.branch-2.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-13 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086052#comment-16086052
 ] 

Yufei Gu commented on YARN-6769:


Hi [~daemon], we usually put the contributor's name in git logs, something like 
"YARN-6769.  (Daemon via Yufei Gu)". I assume daemon is not your real 
name. Do you want to use your real name in the git log? If yes, please send 
me your name. Thanks. 

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Assignee: daemon
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-6769.001.patch, YARN-6769.002.patch, 
> YARN-6769.003.patch, YARN-6769.004.patch
>
>
> When using the FairScheduler as the RM scheduler, we sort all queues and 
> applications before assigning containers. 
> We use FairSharePolicy#compare as the comparator, but the comparator is 
> not perfect.
> It has the following problem:
> 1. When a queue's resource usage is over its minShare (minResources), it is 
> placed behind a queue whose demand is zero,
> so the zero-demand queue gets a greater opportunity to receive resources 
> even though it does not want them. This wastes scheduling time when 
> assigning containers to queues or applications.
> I have fixed it, and I will upload the patch to the JIRA.
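
As an illustration of the fix's intent, a hedged, self-contained sketch 
follows; the real change lives inside FairSharePolicy#compare, and the 
Schedulable stand-in below is deliberately simplified:
{code}
import java.util.Comparator;

// Illustrative sketch only; 'Schedulable' is a minimal stand-in for the
// FairScheduler interface, and demand is reduced to a single long.
interface Schedulable {
  long getDemand(); // stand-in: total resource demand, 0 means "wants nothing"
}

class DemandAwareComparator implements Comparator<Schedulable> {
  private final Comparator<Schedulable> fairShareOrder;

  DemandAwareComparator(Comparator<Schedulable> fairShareOrder) {
    this.fairShareOrder = fairShareOrder;
  }

  @Override
  public int compare(Schedulable s1, Schedulable s2) {
    boolean idle1 = s1.getDemand() == 0;
    boolean idle2 = s2.getDemand() == 0;
    if (idle1 != idle2) {
      return idle1 ? 1 : -1;            // zero-demand entries sort last
    }
    return fairShareOrder.compare(s1, s2); // otherwise keep fair-share order
  }
}
{code}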



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6769) Put the no demand queue after the most in FairSharePolicy#compare

2017-07-13 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086013#comment-16086013
 ] 

Yufei Gu commented on YARN-6769:


LGTM. +1. 

> Put the no demand queue after the most in FairSharePolicy#compare
> -
>
> Key: YARN-6769
> URL: https://issues.apache.org/jira/browse/YARN-6769
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.7.2
>Reporter: daemon
>Assignee: daemon
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-6769.001.patch, YARN-6769.002.patch, 
> YARN-6769.003.patch, YARN-6769.004.patch
>
>
> When using the FairScheduler as the RM scheduler, we sort all queues and 
> applications before assigning containers. 
> We use FairSharePolicy#compare as the comparator, but the comparator is 
> not perfect.
> It has the following problem:
> 1. When a queue's resource usage is over its minShare (minResources), it is 
> placed behind a queue whose demand is zero,
> so the zero-demand queue gets a greater opportunity to receive resources 
> even though it does not want them. This wastes scheduling time when 
> assigning containers to queues or applications.
> I have fixed it, and I will upload the patch to the JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6134) [Security] Regenerate delegation token for app just before token expires if app collector is active

2017-07-13 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6134:
---
Labels: yarn-5355-merge-blocker  (was: )

> [Security] Regenerate delegation token for app just before token expires if 
> app collector is active
> ---
>
> Key: YARN-6134
> URL: https://issues.apache.org/jira/browse/YARN-6134
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6133) [Security] Renew delegation token for app automatically if an app collector is active

2017-07-13 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6133:
---
Labels: yarn-5355-merge-blocker  (was: )

> [Security] Renew delegation token for app automatically if an app collector 
> is active
> -
>
> Key: YARN-6133
> URL: https://issues.apache.org/jira/browse/YARN-6133
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-13 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6130:
---
Labels: yarn-5355-merge-blocker  (was: )

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-13 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Attachment: YARN-5892.branch-2.016.patch

Submitting a patch cherry-picked from trunk to branch-2 so that the pre-commit 
build can run on it.

> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-alpha3
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch, 
> YARN-5892.branch-2.016.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6792) Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo

2017-07-13 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-6792:
---
Attachment: YARN-6792.v2.patch

> Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo
> -
>
> Key: YARN-6792
> URL: https://issues.apache.org/jira/browse/YARN-6792
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Blocker
> Attachments: YARN-6792.v1.patch, YARN-6792.v2.patch
>
>
> NodeIDsInfo contains a typo and there is a missing constructor in 
> LabelsToNodesInfo. These bugs do not allow a correct conversion to XML of 
> LabelsToNodesInfo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6775) CapacityScheduler: Improvements to assignContainers()

2017-07-13 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated YARN-6775:
-
Attachment: YARN-6775.branch-2.002.patch

> CapacityScheduler: Improvements to assignContainers()
> -
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: YARN-6775.001.patch, YARN-6775.002.patch, 
> YARN-6775.branch-2.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6760) ZKRMStateStore.constructZkRootNodeACL seems to not set acls as desired

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085949#comment-16085949
 ] 

Hadoop QA commented on YARN-6760:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 44s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6760 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875427/YARN-6760.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bd86ba51bf02 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b61ab85 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/16419/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16419/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16419/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16419/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ZKRMStateStore.constructZkRootNodeACL seems to not set acls as desired
> --
>
> Key: YARN-6760
> URL: https://issues.apache.org/jira/browse/YARN-6760
> 

[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085932#comment-16085932
 ] 

Hadoop QA commented on YARN-4455:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
39s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 6 new + 
43 unchanged - 5 fixed = 49 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 22 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
1s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-4455 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-07-13 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085920#comment-16085920
 ] 

Sunil G commented on YARN-6788:
---

Seems there is some issue in a test case; looking into it.

> Improve performance of resource profile branch
> --
>
> Key: YARN-6788
> URL: https://issues.apache.org/jira/browse/YARN-6788
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Blocker
> Attachments: YARN-6788-YARN-3926.001.patch, 
> YARN-6788-YARN-3926.002.patch, YARN-6788-YARN-3926.003.patch, 
> YARN-6788-YARN-3926.004.patch
>
>
> Currently we could see a 15% performance delta with this branch. 
> A few performance improvements are needed to address this.
> Also this patch will handle 
> [comments|https://issues.apache.org/jira/browse/YARN-6761?focusedCommentId=16075418=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16075418]
>  from [~leftnoteasy].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085917#comment-16085917
 ] 

Hadoop QA commented on YARN-4455:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
54s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 6 new + 
43 unchanged - 5 fixed = 49 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 22 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
47s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-4455 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-6819) Application report fails if app rejected due to nodesize

2017-07-13 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085910#comment-16085910
 ] 

Rohith Sharma K S commented on YARN-6819:
-

Thanks [~bibinchundatt] for the patch.

I see many potential issues after YARN-5006 which are not handled gracefully. 
Can you confirm all of these with UT/FT test cases?
# It looks like the state transition is not complete for a rejected app; the 
application remains in the FINAL_SAVING state only, and the final state is 
not updated!
# Since the state transition is not complete, the application *finish time* 
will not be updated.

Comments on the patch:
# Can you add a proper log message with the reason? The current log message is 
a bit confusing, at least for me. 
# Add an FT test case to check that the proper transition happens, and also 
verify the application report for this app. 

> Application report fails if app rejected due to nodesize
> 
>
> Key: YARN-6819
> URL: https://issues.apache.org/jira/browse/YARN-6819
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6819.001.patch
>
>
> In YARN-5006, an application is rejected when the node-size limit is 
> exceeded. In {{FinalSavingTransition}}, stateBeforeFinalSaving is not set 
> after skipping the save to the store, which causes the application report 
> to fail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6788) Improve performance of resource profile branch

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085898#comment-16085898
 ] 

Hadoop QA commented on YARN-6788:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-3926 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
46s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} YARN-3926 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} YARN-3926 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-3926 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} YARN-3926 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 19 new + 96 unchanged - 16 fixed = 115 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
18s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 3 new + 0 unchanged - 1 fixed = 3 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 26s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}178m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}238m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
|  |  org.apache.hadoop.yarn.api.records.impl.AbstractResource.getResources() 
may expose internal representation by returning AbstractResource.resources  At 
AbstractResource.java:by returning AbstractResource.resources  At 
AbstractResource.java:[line 120] |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.yarn.util.resource.ResourceUtils.indexForResourceInformation 
in org.apache.hadoop.yarn.util.resource.ResourceUtils.updateResourceTypeIndex() 
 At ResourceUtils.java:field 
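
Illustrative only (hypothetical class names, not the YARN-3926 sources), the 
two classic fixes the FindBugs warnings above are asking for: a defensive view 
for the exposed internal collection, and a properly guarded lazy init of the 
static field.

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class ResourcesHolder {
  private final Map<String, Long> resources = new HashMap<>();

  // EI_EXPOSE_REP-style fix: hand out an unmodifiable view (or a copy)
  // instead of the internal map, so callers cannot mutate internal state.
  public Map<String, Long> getResources() {
    return Collections.unmodifiableMap(resources);
  }
}

class IndexHolder {
  private static volatile Map<String, Integer> index;

  // LI_LAZY_INIT-style fix: double-checked locking on a volatile field so
  // two threads cannot both observe null and initialize the index twice.
  static Map<String, Integer> getIndex() {
    if (index == null) {
      synchronized (IndexHolder.class) {
        if (index == null) {
          index = new HashMap<>();   // build the real index here
        }
      }
    }
    return index;
  }
}
{code}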

[jira] [Commented] (YARN-6759) TestRMRestart.testRMRestartWaitForPreviousAMToFinish is failing in trunk

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085891#comment-16085891
 ] 

Hadoop QA commented on YARN-6759:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 43m 
19s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6759 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877085/YARN-6759.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d543e6412835 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b61ab85 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16417/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16417/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRMRestart.testRMRestartWaitForPreviousAMToFinish is failing in trunk
> 
>
> Key: YARN-6759
> URL: https://issues.apache.org/jira/browse/YARN-6759
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
> Attachments: YARN-6759.001.patch
>
>
> {code}
> java.lang.IllegalArgumentException: Total wait time should be greater than 
> check interval time
>   at 
> 
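
For context, the {{IllegalArgumentException}} above comes from the precondition 
in {{GenericTestUtils.waitFor}}: the total wait time must be larger than the 
check interval. A minimal call that satisfies it, assuming the Guava-Supplier 
signature trunk used at the time (an assumption, not verified against the 
attached patch):

{code}
// Sketch only: waitFor(check, checkEveryMillis, waitForMillis) requires
// waitForMillis > checkEveryMillis, or it fails exactly as in the trace above.
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

public class WaitForExample {
  public static void main(String[] args) throws Exception {
    GenericTestUtils.waitFor(new Supplier<Boolean>() {
      @Override
      public Boolean get() {
        return true;   // a real test polls some condition here
      }
    }, 100 /* check interval, ms */, 1000 /* total wait, ms */);
  }
}
{code}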
