[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-21 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299989#comment-16299989
 ] 

Sunil G commented on YARN-7676:
---

[~botong] Thanks for raising this, and thanks [~asuresh] and [~jlowe] for the 
detailed clarification.
In line with the same thoughts, application priority is designed such that a 
higher integer is considered a higher priority, unlike container priority.
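
To make the two conventions concrete, here is a minimal sketch; the 
{{appOutranks}} and {{requestOutranks}} helpers are hypothetical illustrations 
of the intent, not YARN API:

{code:java}
// Hypothetical helpers spelling out the two conventions discussed in this
// thread; they are not part of org.apache.hadoop.yarn.api.records.Priority.
public class PriorityConventionDemo {

  // Application (submission) priority: a larger integer is a higher priority.
  static boolean appOutranks(int a, int b) {
    return a > b;
  }

  // Container/request priority (ResourceRequest, SchedulerRequestKey):
  // a smaller integer is a higher priority.
  static boolean requestOutranks(int a, int b) {
    return a < b;
  }

  public static void main(String[] args) {
    System.out.println(appOutranks(10, 1));    // true: an app at priority 10 beats an app at priority 1
    System.out.println(requestOutranks(0, 1)); // true: a request at priority 0 beats a request at priority 1
  }
}
{code}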



> Fix inconsistent priority ordering in Priority and SchedulerRequestKey
> --
>
> Key: YARN-7676
> URL: https://issues.apache.org/jira/browse/YARN-7676
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7676.v1.patch
>
>
> Today the priority ordering in _Priority.compareTo()_ and 
> _SchedulerRequestKey.compareTo()_ is inconsistent. Both _compareTo_ methods 
> try to reverse the order: 
> P0.compareTo(P1) > 0 means that, priority-wise, P0 < P1. However, 
> SK(P0).compareTo(SK(P1)) < 0 means that, priority-wise, SK(P0) > SK(P1). 
> This patch attempts to fix that by undoing both reversals, so that, 
> priority-wise, P0 > P1 and SK(P0) > SK(P1). 
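
The mismatch described above can be reproduced with two simplified stand-in 
classes; {{SimplePriority}} and {{SimpleRequestKey}} below are illustrative 
models of the reversing logic, not the actual YARN implementations:

{code:java}
// Simplified model of the inconsistency described in the issue; these are
// stand-in classes, not the real Priority / SchedulerRequestKey.
class SimplePriority implements Comparable<SimplePriority> {
  final int value;
  SimplePriority(int value) { this.value = value; }

  @Override
  public int compareTo(SimplePriority other) {
    // Reverses the natural integer order.
    return other.value - this.value;
  }
}

class SimpleRequestKey implements Comparable<SimpleRequestKey> {
  final SimplePriority priority;
  SimpleRequestKey(SimplePriority priority) { this.priority = priority; }

  @Override
  public int compareTo(SimpleRequestKey other) {
    // Reverses again, so the two orderings disagree.
    return other.priority.compareTo(this.priority);
  }
}

public class OrderingMismatchDemo {
  public static void main(String[] args) {
    SimplePriority p0 = new SimplePriority(0);
    SimplePriority p1 = new SimplePriority(1);

    // P0.compareTo(P1) > 0 ...
    System.out.println(p0.compareTo(p1));   // 1
    // ... while SK(P0).compareTo(SK(P1)) < 0, as described above.
    System.out.println(new SimpleRequestKey(p0)
        .compareTo(new SimpleRequestKey(p1)));  // -1
  }
}
{code}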






[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299165#comment-16299165
 ] 

genericqa commented on YARN-7676:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m  5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 26s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 53s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 18 unchanged - 1 fixed = 18 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m  2s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 16s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 34s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 32s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m  1s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyIntraQueueUserLimit |
|   | hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF |
|   | hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyIntraQueue |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
|   | hadoop.yarn.server.resourcemanager.scheduler.policy.TestFifoOrderingPolicy

[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299050#comment-16299050
 ] 

Botong Huang commented on YARN-7676:


Yes I agree. If there's no other proposal to avoid this confusing code, I guess 
we will just leave it as is. Thanks [~asuresh] and [~jlowe] for the fast 
response! 




[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299014#comment-16299014
 ] 

Jason Lowe commented on YARN-7676:
--

I'm not sure we can fix this in a backwards-compatible way.  The Priority class 
is simply a priority number with no built-in semantics on the ordering of those 
numbers.  Two systems decided to implement them differently.  It's not 
inherently broken since these Priority objects are completely separate, but it 
can be confusing.





[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299005#comment-16299005
 ] 

Botong Huang commented on YARN-7676:


Yeah, {{TestApplicationPriority}} indeed fails with the v1 patch reversing the 
Priority order. So basically, application priority uses Priority assuming a 
larger value means higher priority, while {{ResourceRequest}} uses Priority 
assuming a smaller value means higher priority...
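
A toy illustration of why that test breaks, assuming application ordering 
relies on the current reversed {{Priority.compareTo()}} behavior described in 
the issue; {{ToyPriority}} is a stand-in class, not the real 
{{org.apache.hadoop.yarn.api.records.Priority}}:

{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Toy model of the current (reversed) compareTo described in the issue.
class ToyPriority implements Comparable<ToyPriority> {
  final int value;
  ToyPriority(int value) { this.value = value; }

  @Override
  public int compareTo(ToyPriority other) {
    // Reversed ordering: a plain ascending sort puts the larger integer,
    // i.e. the higher *application* priority, first.
    return other.value - this.value;
  }

  @Override
  public String toString() { return Integer.toString(value); }
}

public class AppOrderingDemo {
  public static void main(String[] args) {
    List<ToyPriority> apps = Arrays.asList(
        new ToyPriority(1), new ToyPriority(10), new ToyPriority(5));
    Collections.sort(apps);
    // Prints [10, 5, 1]: higher application priority sorts first. Undoing the
    // reversal (as the v1 patch does) would flip this to [1, 5, 10], which is
    // presumably why TestApplicationPriority fails with the patch applied.
    System.out.println(apps);
  }
}
{code}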




[jira] [Commented] (YARN-7676) Fix inconsistent priority ordering in Priority and SchedulerRequestKey

2017-12-20 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16298987#comment-16298987
 ] 

Arun Suresh commented on YARN-7676:
---

Thanks for raising this, Botong.
[~leftnoteasy] / [~sunilg], will this affect application priority (the ordering 
of the apps themselves)? Looking at {{TestApplicationPriority}}, I am guessing it would.
