[jira] [Commented] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration

2020-01-23 Thread Prabhu Joseph (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022724#comment-17022724
 ] 

Prabhu Joseph commented on YARN-10022:
--

[~kmarton] The latest patch  [^YARN-10022.002.patch]  looks good to me. I will 
do some more thorough testing and update. A few comments:

1. In CapacitySchedulerConfigValidator.java, there are stray blank lines in 
CapacitySchedulerConfigValidator(), validateCSConfiguration() and 
validatePlacementRules().

2. Can you include a test case in TestRMWebServicesConfigurationMutation to 
test validateAndGetSchedulerConfiguration? A rough sketch of the expected call 
shape is included below.
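
For illustration only, here is a minimal sketch of how such a validation 
request might be issued from a client or test. The endpoint path 
/ws/v1/cluster/scheduler-conf/validate and the payload shape are assumptions 
here, not taken from the patch:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ValidateSchedulerConfSketch {
  public static void main(String[] args) throws Exception {
    // Assumed endpoint -- the real path is whatever the patch registers.
    URL url = new URL(
        "http://rm-host:8088/ws/v1/cluster/scheduler-conf/validate");

    // Hypothetical mutation payload whose resulting config should be validated.
    String payload = "<sched-conf>"
        + "<update-queue><queue-name>root.default</queue-name>"
        + "<params><entry><key>capacity</key><value>100</value></entry></params>"
        + "</update-queue></sched-conf>";

    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/xml");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(payload.getBytes(StandardCharsets.UTF_8));
    }

    // A 200 status would mean the proposed configuration validated
    // successfully; an error status should carry the validation failure.
    System.out.println("HTTP status: " + conn.getResponseCode());
  }
}
{code}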



> Create RM Rest API to validate a CapacityScheduler Configuration
> 
>
> Key: YARN-10022
> URL: https://issues.apache.org/jira/browse/YARN-10022
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Kinga Marton
>Assignee: Kinga Marton
>Priority: Major
> Attachments: YARN-10022.001.patch, YARN-10022.002.patch, 
> YARN-10022.WIP.patch, YARN-10022.WIP2.patch
>
>
> RMWebService should expose a new API which takes a CapacityScheduler 
> configuration as input, validates it, and returns success / failure.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org




[jira] [Commented] (YARN-9879) Allow multiple leaf queues with the same name in CS

2020-01-23 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022718#comment-17022718
 ] 

Peter Bacsko commented on YARN-9879:


Hey folks, I can see that a lengthy conversation is already going on, but I'll 
try to keep mine short.

Regarding {{getQueueName()}} / {{getQueuePath()}}, it's up to you to decide, I 
don't have enough context right now.
I'm trying to be constructive from a code readability standpoint.

Three things that stand out to me are the following:

#1
{{private final Map<String, Set<CSQueue>> ambiguousShortNames = new 
HashMap<>();}}

My question to [~shuzirra] is: do we need to keep track of what queues a short 
name is mapped to? Do we use this information anywhere? Because if we use it as 
a counter, then it's simply much easier to have a
{{private final Map<String, Integer> leafCount = new HashMap<>();}}

And quite obviously you don't have ambiguity if leafCount == 1.

Because of this, the {{addShortNameMapping()}} is already a bit hard to grasp.

#2 I would synchronize the public method {{add()}}, not the private method.

To show what I was thinking of, here's how I'd code add/remove:

{noformat}
// Keep this as it is
public synchronized void add(CSQueue queue) {
  String fullName = queue.getQueueName();
  String shortName = queue.getQueueShortName();

  fullNameQueues.put(fullName, queue);
  if (queue instanceof LeafQueue) {
    addShortNameMapping(shortName, fullName);
  }
}

private void addShortNameMapping(String shortName, String fullName) {
  // initialize the counter if necessary
  leafCount.computeIfAbsent(shortName, k -> 0);

  if (leafCount.computeIfPresent(shortName, (k, v) -> v + 1) > 1) {
    LOG.warn("Multiple mappings for queue {}!", shortName);
  } else {
    shortNameToFullName.put(shortName, fullName);
  }
}

public void remove(CSQueue queue) {
  // If no queue is specified, we can consider it already removed; this is
  // also consistent with HashMap behaviour, so no new issues are caused by it.
  if (queue == null) {
    return;
  }

  String fullName = queue.getQueueName();
  String shortName = queue.getQueueShortName();

  // remove from both the full name and the short name maps
  fullNameQueues.remove(fullName);

  if (queue instanceof LeafQueue &&
      leafCount.computeIfPresent(shortName, (k, v) -> v - 1) == 0) {
    shortNameToFullName.remove(shortName);
  }
}
{noformat}

#3 In {{get()}} it is important to check for ambiguous mappings, so an 
exception must be thrown if leafCount > 1.
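
To make #3 concrete, here is a minimal sketch of such a check, reusing the 
hypothetical fullNameQueues / shortNameToFullName / leafCount maps from the 
snippet above (the exception type is only an assumption):

{noformat}
public synchronized CSQueue get(String name) {
  // A full path is always unambiguous.
  CSQueue queue = fullNameQueues.get(name);
  if (queue != null) {
    return queue;
  }

  // A short name must be rejected if more than one leaf queue uses it.
  Integer count = leafCount.get(name);
  if (count != null && count > 1) {
    throw new YarnRuntimeException(
        "Queue short name " + name + " is ambiguous, use the full path");
  }

  String fullName = shortNameToFullName.get(name);
  return fullName == null ? null : fullNameQueues.get(fullName);
}
{noformat}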

> Allow multiple leaf queues with the same name in CS
> ---
>
> Key: YARN-9879
> URL: https://issues.apache.org/jira/browse/YARN-9879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: DesignDoc_v1.pdf, YARN-9879.POC001.patch
>
>
> Currently the leaf queue's name must be unique regardless of its position in 
> the queue hierarchy. 
> A design doc and first proposal are being made; I'll attach them as soon as 
> they're done.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default queue properly

2020-01-23 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10099:

Attachment: YARN-10099-002.patch

> FS-CS converter: handle allow-undeclared-pools and user-as-default queue 
> properly
> -
>
> Key: YARN-10099
> URL: https://issues.apache.org/jira/browse/YARN-10099
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10099-001.patch, YARN-10099-002.patch
>
>
> Based on the latest documentation, there are two important properties that 
> are ignored if we have placement rules:
> ||Property||Explanation||
> |yarn.scheduler.fair.allow-undeclared-pools|If this is true, new queues can 
> be created at application submission time, whether because they are specified 
> as the application’s queue by the submitter or because they are placed there 
> by the user-as-default-queue property. If this is false, any time an app 
> would be placed in a queue that is not specified in the allocations file, it 
> is placed in the “default” queue instead. Defaults to true. *If a queue 
> placement policy is given in the allocations file, this property is ignored.*|
> |yarn.scheduler.fair.user-as-default-queue|Whether to use the username 
> associated with the allocation as the default queue name, in the event that a 
> queue name is not specified. If this is set to “false” or unset, all jobs 
> have a shared default queue, named “default”. Defaults to true. *If a queue 
> placement policy is given in the allocations file, this property is ignored.*|
> Right now these settings affect the conversion regardless of the placement 
> rules. 
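
For illustration, a minimal sketch of the check the converter could apply; the 
method names around the two (real) property keys are hypothetical, not taken 
from the patch:

{code}
// Only let the two FS properties drive the conversion when the
// allocations file defines no queue placement policy.
if (!allocationsFile.hasQueuePlacementPolicy()) {
  boolean allowUndeclaredPools = conf.getBoolean(
      "yarn.scheduler.fair.allow-undeclared-pools", true);
  boolean userAsDefaultQueue = conf.getBoolean(
      "yarn.scheduler.fair.user-as-default-queue", true);
  emitMappingRules(allowUndeclaredPools, userAsDefaultQueue);
} else {
  // A placement policy is present: FS ignores both properties, so the
  // converter should derive the CS mapping rules from the policy alone.
  emitMappingRules(allocationsFile.getQueuePlacementPolicy());
}
{code}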



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime

2020-01-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022668#comment-17022668
 ] 

Hadoop QA commented on YARN-10084:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
12s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
51s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} branch-3.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-3.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 13s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisherForV2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:0f25cbbb251 |
| JIRA Issue | YARN-10084 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachme

[jira] [Updated] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime

2020-01-23 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-10084:
--
Attachment: YARN-10084.branch-3.2.005.patch

> Allow inheritance of max app lifetime / default app lifetime
> 
>
> Key: YARN-10084
> URL: https://issues.apache.org/jira/browse/YARN-10084
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.10.0, 3.2.1, 3.1.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-10084.001.patch, YARN-10084.002.patch, 
> YARN-10084.003.patch, YARN-10084.004.patch, YARN-10084.005.patch, 
> YARN-10084.branch-3.2.005.patch
>
>
> Currently, {{maximum-application-lifetime}} and 
> {{default-application-lifetime}} must be set for each leaf queue. If it is 
> not set for a particular leaf queue, then there will be no time limit on apps 
> running in that queue. It should be possible to set 
> {{yarn.scheduler.capacity.root.maximum-application-lifetime}} for the root 
> queue and allow child queues to override that value if desired.
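
For illustration, the inheritance asked for above would allow a configuration 
like the following sketch, where leaf queues fall back to the root value unless 
they override it (queue names and values are examples only):

{code}
yarn.scheduler.capacity.root.maximum-application-lifetime=86400
yarn.scheduler.capacity.root.default-application-lifetime=3600
yarn.scheduler.capacity.root.dev.maximum-application-lifetime=7200
{code}

Here root.dev would override the inherited maximum, while every other leaf 
queue would inherit the root values.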



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime

2020-01-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022608#comment-17022608
 ] 

Hadoop QA commented on YARN-10084:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 87m 
23s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10084 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12991696/YARN-10084.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9bbdf04e4eb8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 

[jira] [Updated] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime

2020-01-23 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-10084:
--
Attachment: YARN-10084.005.patch

> Allow inheritance of max app lifetime / default app lifetime
> 
>
> Key: YARN-10084
> URL: https://issues.apache.org/jira/browse/YARN-10084
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.10.0, 3.2.1, 3.1.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-10084.001.patch, YARN-10084.002.patch, 
> YARN-10084.003.patch, YARN-10084.004.patch, YARN-10084.005.patch
>
>
> Currently, {{maximum-application-lifetime}} and 
> {{default-application-lifetime}} must be set for each leaf queue. If it is 
> not set for a particular leaf queue, then there will be no time limit on apps 
> running in that queue. It should be possible to set 
> {{yarn.scheduler.capacity.root.maximum-application-lifetime}} for the root 
> queue and allow child queues to override that value if desired.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10084) Allow inheritance of max app lifetime / default app lifetime

2020-01-23 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022516#comment-17022516
 ] 

Eric Payne commented on YARN-10084:
---

Thanks [~ebadger]. I uploaded patch 005 that includes new and improved unit 
tests.

> Allow inheritance of max app lifetime / default app lifetime
> 
>
> Key: YARN-10084
> URL: https://issues.apache.org/jira/browse/YARN-10084
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler
>Affects Versions: 2.10.0, 3.2.1, 3.1.3
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-10084.001.patch, YARN-10084.002.patch, 
> YARN-10084.003.patch, YARN-10084.004.patch, YARN-10084.005.patch
>
>
> Currently, {{maximum-application-lifetime}} and 
> {{default-application-lifetime}} must be set for each leaf queue. If it is 
> not set for a particular leaf queue, then there will be no time limit on apps 
> running in that queue. It should be possible to set 
> {{yarn.scheduler.capacity.root.maximum-application-lifetime}} for the root 
> queue and allow child queues to override that value if desired.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-9790) Failed to set default-application-lifetime if maximum-application-lifetime is less than or equal to zero

2020-01-23 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne resolved YARN-9790.
--
Fix Version/s: 2.10.1
   3.1.4
   3.2.2
   Resolution: Fixed

> Failed to set default-application-lifetime if maximum-application-lifetime is 
> less than or equal to zero
> 
>
> Key: YARN-9790
> URL: https://issues.apache.org/jira/browse/YARN-9790
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>
> Attachments: YARN-9790.001.patch, YARN-9790.002.patch, 
> YARN-9790.003.patch, YARN-9790.004.patch
>
>
> capacity-scheduler
> {code}
> ...
> yarn.scheduler.capacity.root.dev.maximum-application-lifetime=-1
> yarn.scheduler.capacity.root.dev.default-application-lifetime=604800
> {code}
> refreshQueues failed as follows:
> {code}
> 2019-08-28 15:21:57,423 WARN  resourcemanager.AdminService 
> (AdminService.java:logAndWrapException(910)) - Exception refresh queues.
> java.io.IOException: Failed to re-init queues : Default lifetime604800 can't 
> exceed maximum lifetime -1
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:477)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:423)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:394)
> at 
> org.apache.hadoop.yarn.server.api.impl.pb.service.ResourceManagerAdministrationProtocolPBServiceImpl.refreshQueues(ResourceManagerAdministrationProtocolPBServiceImpl.java:114)
> at 
> org.apache.hadoop.yarn.proto.ResourceManagerAdministrationProtocol$ResourceManagerAdministrationProtocolService$2.callBlockingMethod(ResourceManagerAdministrationProtocol.java:271)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Default 
> lifetime604800 can't exceed maximum lifetime -1
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:268)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.(LeafQueue.java:162)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.(LeafQueue.java:141)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:259)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:283)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.reinitializeQueues(CapacitySchedulerQueueManager.java:171)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitializeQueues(CapacityScheduler.java:726)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:472)
> ... 12 more
> {code}
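
For context, a minimal sketch of the guard the fix implies: a non-positive 
maximum lifetime means "no limit", so the default lifetime only needs to be 
checked against a positive maximum (variable names are illustrative):

{code}
if (maxApplicationLifetime > 0
    && defaultApplicationLifetime > maxApplicationLifetime) {
  throw new YarnRuntimeException("Default lifetime " +
      defaultApplicationLifetime + " can't exceed maximum lifetime " +
      maxApplicationLifetime);
}
{code}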



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9790) Failed to set default-application-lifetime if maximum-application-lifetime is less than or equal to zero

2020-01-23 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022467#comment-17022467
 ] 

Eric Payne commented on YARN-9790:
--

Backport completed.

> Failed to set default-application-lifetime if maximum-application-lifetime is 
> less than or equal to zero
> 
>
> Key: YARN-9790
> URL: https://issues.apache.org/jira/browse/YARN-9790
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Fix For: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>
> Attachments: YARN-9790.001.patch, YARN-9790.002.patch, 
> YARN-9790.003.patch, YARN-9790.004.patch
>
>
> capacity-scheduler
> {code}
> ...
> yarn.scheduler.capacity.root.dev.maximum-application-lifetime=-1
> yarn.scheduler.capacity.root.dev.default-application-lifetime=604800
> {code}
> refreshQueues failed as follows:
> {code}
> 2019-08-28 15:21:57,423 WARN  resourcemanager.AdminService 
> (AdminService.java:logAndWrapException(910)) - Exception refresh queues.
> java.io.IOException: Failed to re-init queues : Default lifetime604800 can't 
> exceed maximum lifetime -1
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:477)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:423)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:394)
> at 
> org.apache.hadoop.yarn.server.api.impl.pb.service.ResourceManagerAdministrationProtocolPBServiceImpl.refreshQueues(ResourceManagerAdministrationProtocolPBServiceImpl.java:114)
> at 
> org.apache.hadoop.yarn.proto.ResourceManagerAdministrationProtocol$ResourceManagerAdministrationProtocolService$2.callBlockingMethod(ResourceManagerAdministrationProtocol.java:271)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Default 
> lifetime604800 can't exceed maximum lifetime -1
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:268)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.(LeafQueue.java:162)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.(LeafQueue.java:141)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:259)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:283)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.reinitializeQueues(CapacitySchedulerQueueManager.java:171)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitializeQueues(CapacityScheduler.java:726)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:472)
> ... 12 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9768) RM Renew Delegation token thread should timeout and retry

2020-01-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022401#comment-17022401
 ] 

Íñigo Goiri commented on YARN-9768:
---

Yes, let's play it safe here and run it again.

> RM Renew Delegation token thread should timeout and retry
> -
>
> Key: YARN-9768
> URL: https://issues.apache.org/jira/browse/YARN-9768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: Manikandan R
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9768.001.patch, YARN-9768.002.patch, 
> YARN-9768.003.patch, YARN-9768.004.patch, YARN-9768.005.patch, 
> YARN-9768.006.patch, YARN-9768.007.patch, YARN-9768.008.patch, 
> YARN-9768.009.patch
>
>
> The delegation token renewer thread in the RM (DelegationTokenRenewer.java) 
> renews received HDFS tokens to check their validity and expiration time.
> This call is made to an underlying HDFS NN or a Router node (which exposes 
> the same APIs as the HDFS NN). If one of the nodes is bad and the renew call 
> gets stuck, the thread remains stuck indefinitely. The thread should ideally 
> time out the renewToken call and retry from the client's perspective.
>  
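
A minimal sketch of one common way to bound a blocking renew call with a 
timeout and retry; this is illustrative only, not the actual 
DelegationTokenRenewer code, and the retry count and timeout are assumptions:

{code}
// Bounds a blocking token.renew() so a bad NN/Router cannot wedge the
// renewer thread indefinitely, retrying a few times before giving up.
private long renewWithTimeout(Token<?> token, Configuration conf)
    throws IOException, InterruptedException {
  ExecutorService pool = Executors.newSingleThreadExecutor();
  try {
    for (int attempt = 1; attempt <= 3; attempt++) {
      Future<Long> renewal = pool.submit(() -> token.renew(conf));
      try {
        return renewal.get(60, TimeUnit.SECONDS);  // new expiration time
      } catch (TimeoutException e) {
        renewal.cancel(true);  // interrupt the stuck renew call, then retry
      } catch (ExecutionException e) {
        throw new IOException("Token renewal failed", e.getCause());
      }
    }
    throw new IOException("Token renewal timed out after 3 attempts");
  } finally {
    pool.shutdownNow();
  }
}
{code}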



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10043) FairOrderingPolicy Improvements

2020-01-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022381#comment-17022381
 ] 

Hadoop QA commented on YARN-10043:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 85m 
50s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10043 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12991680/YARN-10043.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6aaee507162a 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c1fa24 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25431/testReport/ |
| Max. process+thread count | 810 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25431/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> FairOrderingPolicy Improvements

[jira] [Commented] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default queue properly

2020-01-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022344#comment-17022344
 ] 

Hadoop QA commented on YARN-10099:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 4 unchanged - 0 fixed = 7 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 87m  
2s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10099 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12991671/YARN-10099-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux cbaf2e7c0e12 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c1fa24 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/25429/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25429/test

[jira] [Updated] (YARN-10103) Capacity scheduler: add support for create=true/false per mapping rule

2020-01-23 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10103:

Description: 
You can't ask Capacity Scheduler for a mapping to create a queue if it doesn't 
exist.

For example, this mapping would use the first rule if the queue exists. If it 
doesn't, then it proceeds to the next rule:
 {{u:%user:%primary_group.%user:create=false;u:%user%:root.default}}

Let's say user "alice" belongs to the "admins" group. It would first try to map 
{{root.admins.alice}}. But, if the queue doesn't exist, then it places the 
application into {{root.default}}.

  was:
You can't ask Capacity Scheduler for a mapping to create a queue if it doesn't 
exist.

For example, this mapping would use the first rule if the queue exists. If it 
doesn't, then it proceeds to the next rule.

Example:
{{u:%user:%primary_group.%user:create=false;u:%user%:root.default}}

Let's say user "alice" belongs to the "admins" group. It would first try to map 
{{root.admins.alice}}. But, if the queue doesn't exist, then it places the 
application into {{root.default}}.


> Capacity scheduler: add support for create=true/false per mapping rule
> --
>
> Key: YARN-10103
> URL: https://issues.apache.org/jira/browse/YARN-10103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Priority: Major
>
> You can't ask Capacity Scheduler for a mapping to create a queue if it 
> doesn't exist.
> For example, this mapping would use the first rule if the queue exists. If it 
> doesn't, then it proceeds to the next rule:
>  {{u:%user:%primary_group.%user:create=false;u:%user%:root.default}}
> Let's say user "alice" belongs to the "admins" group. It would first try to 
> map {{root.admins.alice}}. But, if the queue doesn't exist, then it places 
> the application into {{root.default}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10102) Capacity scheduler: add support for %specified mapping

2020-01-23 Thread Peter Bacsko (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022336#comment-17022336
 ] 

Peter Bacsko commented on YARN-10102:
-

[~maniraj...@gmail.com] haven't you worked on something similar by any chance?

> Capacity scheduler: add support for %specified mapping
> --
>
> Key: YARN-10102
> URL: https://issues.apache.org/jira/browse/YARN-10102
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Priority: Major
>
> To reduce the gap between Fair Scheduler and Capacity Scheduler, it's 
> reasonable to have a {{%specified}} mapping. This would be equivalent to the 
> {{<specified>}} placement rule in FS, that is, use the queue that comes in 
> with the application submission context.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10103) Capacity scheduler: add support for create=true/false per mapping rule

2020-01-23 Thread Peter Bacsko (Jira)
Peter Bacsko created YARN-10103:
---

 Summary: Capacity scheduler: add support for create=true/false per 
mapping rule
 Key: YARN-10103
 URL: https://issues.apache.org/jira/browse/YARN-10103
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Peter Bacsko


Currently you can't tell Capacity Scheduler, via a mapping rule, whether it 
should create the target queue if it doesn't exist.

For example, this mapping would use the first rule if the queue exists. If it 
doesn't, then it proceeds to the next rule.

Example:
{{u:%user:%primary_group.%user:create=false;u:%user:root.default}}

Let's say user "alice" belongs to the "admins" group. It would first try to map 
{{root.admins.alice}}. But, if the queue doesn't exist, then it places the 
application into {{root.default}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10102) Capacity scheduler: add support for combined %specified mapping

2020-01-23 Thread Peter Bacsko (Jira)
Peter Bacsko created YARN-10102:
---

 Summary: Capacity scheduler: add support for combined %specified 
mapping
 Key: YARN-10102
 URL: https://issues.apache.org/jira/browse/YARN-10102
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Peter Bacsko


To reduce the gap between Fair Scheduler and Capacity Scheduler, it's 
reasonable to have a {{%specified}} mapping. This would be equivalent to the 
{{<specified>}} placement rule in FS, that is, use the queue that comes in 
with the application submission context.
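
For reference, a minimal sketch of the FS placement policy this would mirror, 
in standard fair-scheduler allocation file syntax:
{code:xml}
<queuePlacementPolicy>
  <!-- Use the queue named in the application submission context, if any. -->
  <rule name="specified" create="false"/>
  <!-- Otherwise fall back to the default queue. -->
  <rule name="default"/>
</queuePlacementPolicy>
{code}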



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10102) Capacity scheduler: add support for %specified mapping

2020-01-23 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10102:

Summary: Capacity scheduler: add support for %specified mapping  (was: 
Capacity scheduler: add support for combined %specified mapping)

> Capacity scheduler: add support for %specified mapping
> --
>
> Key: YARN-10102
> URL: https://issues.apache.org/jira/browse/YARN-10102
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Priority: Major
>
> To reduce the gap between Fair Scheduler and Capacity Scheduler, it's 
> reasonable to have a {{%specified}} mapping. This would be equivalent to the 
> {{<specified>}} placement rule in FS, that is, use the queue that comes in 
> with the application submission context.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10101) Support listing of aggregated logs for containers belonging to an application attempt

2020-01-23 Thread Adam Antal (Jira)
Adam Antal created YARN-10101:
-

 Summary: Support listing of aggregated logs for containers 
belonging to an application attempt
 Key: YARN-10101
 URL: https://issues.apache.org/jira/browse/YARN-10101
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: log-aggregation, yarn
Affects Versions: 3.3.0
Reporter: Adam Antal
Assignee: Adam Antal


To display logs without access to the timeline server, we need an interface 
where we can query the list of containers with aggregated logs belonging to an 
application attempt.

We should add support for this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10049) FIFOOrderingPolicy Improvements

2020-01-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022281#comment-17022281
 ] 

Hadoop QA commented on YARN-10049:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m 
14s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10049 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12991320/YARN-10049.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5094db91af6c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c1fa24 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/25430/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/25430/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-

[jira] [Commented] (YARN-10029) Add option to UIv2 to get container logs from the new JHS API

2020-01-23 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022280#comment-17022280
 ] 

Adam Antal commented on YARN-10029:
---

UIv2 using the ATS has the following logic for obtaining a given log file:
1. The user chooses an application, and goes to its log page.
2. The user chooses an application attempt belonging to the application.
3. The user chooses a container belonging to the application attempt.
4. The user chooses a log file belonging to the container.

Step 1 is already given in the UI, step 2 is supported by the UI (the RM has 
an API for this), and step 4 is supported by the new log servlet pushed into 
the JHS in YARN-10028.

The missing step is #3: there is currently no API that gives the full list of 
containers belonging to an application (for running containers we have the RM 
REST API). A new API should be implemented so that we have all the information 
in the UI and do not depend on the ATS at all (at least regarding logs).

The missing API endpoint would ideally list the containers from the logs' 
point of view, i.e. using {{LogAggregationController}}'s 
{{#readAggregatedLogsMeta}} call. That is not related to the ResourceManager 
at all, so it can be incorporated into the JHS as a side endpoint to support 
extracting logs.
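
For illustration, a rough sketch of how such a listing could be backed by that 
meta call; the app id, owner and the attempt-level filtering are assumptions, 
and error handling is omitted:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.logaggregation.ContainerLogMeta;
import org.apache.hadoop.yarn.logaggregation.ContainerLogsRequest;
import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileController;
import org.apache.hadoop.yarn.logaggregation.filecontroller.LogAggregationFileControllerFactory;

public class ListAggregatedContainers {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    ApplicationId appId =
        ApplicationId.fromString("application_1579000000000_0001");
    // Pick the right controller (TFile, IndexedFile, ...) for this app.
    LogAggregationFileController controller =
        new LogAggregationFileControllerFactory(conf)
            .getFileControllerForRead(appId, "alice");

    ContainerLogsRequest request = new ContainerLogsRequest();
    request.setAppId(appId);
    request.setAppOwner("alice");

    for (ContainerLogMeta meta : controller.readAggregatedLogsMeta(request)) {
      // An attempt-level endpoint would filter on the attempt id encoded
      // in the container id here.
      System.out.println(meta.getContainerId() + " on " + meta.getNodeId());
    }
  }
}
{code}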

> Add option to UIv2 to get container logs from the new JHS API
> -
>
> Key: YARN-10029
> URL: https://issues.apache.org/jira/browse/YARN-10029
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.2.1
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: YARN-10029.001.patch
>
>
> Provided the new API is ready to use (also integrated into JHS in 
> YARN-10028), we can add a new config option to UIv2 that would make the UIv2 
> to request logs from the JHS API similarly as the ATSv2.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10094) Add configuration to support NM overuse in RM

2020-01-23 Thread Eric Payne (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022263#comment-17022263
 ] 

Eric Payne commented on YARN-10094:
---

YARN-1011 also seems to be very similar.

> Add configuration to support NM overuse in RM
> -
>
> Key: YARN-10094
> URL: https://issues.apache.org/jira/browse/YARN-10094
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: zhoukang
>Assignee: zhoukang
>Priority: Major
> Attachments: YARN-10094.001.patch
>
>
> In a large cluster, upgrading NMs takes a lot of time.
> Sometimes we want to allow memory or CPU overuse from the RM's point of view.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10043) FairOrderingPolicy Improvements

2020-01-23 Thread Manikandan R (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-10043:

Attachment: YARN-10043.002.patch

> FairOrderingPolicy Improvements
> ---
>
> Key: YARN-10043
> URL: https://issues.apache.org/jira/browse/YARN-10043
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Manikandan R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-10043.001.patch, YARN-10043.002.patch
>
>
> FairOrderingPolicy can be improved by adopting the relevant approaches 
> implemented in FS's FairSharePolicy. This improvement is significant in the 
> FS-to-CS migration context.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10043) FairOrderingPolicy Improvements

2020-01-23 Thread Manikandan R (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022258#comment-17022258
 ] 

Manikandan R commented on YARN-10043:
-

Fixed the checkstyle and javadoc issues. The JUnit failure is not related to this patch.

> FairOrderingPolicy Improvements
> ---
>
> Key: YARN-10043
> URL: https://issues.apache.org/jira/browse/YARN-10043
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Manikandan R
>Assignee: Manikandan R
>Priority: Major
> Attachments: YARN-10043.001.patch, YARN-10043.002.patch
>
>
> FairOrderingPolicy can be improved by adopting the relevant approaches 
> implemented in FS's FairSharePolicy. This improvement is significant in the 
> FS-to-CS migration context.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration

2020-01-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022255#comment-17022255
 ] 

Hadoop QA commented on YARN-10022:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 105 unchanged - 9 fixed = 107 total (was 114) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m 
51s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10022 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12991662/YARN-10022.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b1c3123c5646 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6c1fa24 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/25428/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25428/testReport/ |
| Max. process+thread count | 809 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-s

[jira] [Updated] (YARN-10099) FS-CS converter: handle allow-undeclared-pools and user-as-default queue properly

2020-01-23 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-10099:

Attachment: YARN-10099-001.patch

> FS-CS converter: handle allow-undeclared-pools and user-as-default queue 
> properly
> -
>
> Key: YARN-10099
> URL: https://issues.apache.org/jira/browse/YARN-10099
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-10099-001.patch
>
>
> Based on the latest documentation, there are two important properties that 
> are ignored if we have placement rules:
> ||Property||Explanation||
> |yarn.scheduler.fair.allow-undeclared-pools|If this is true, new queues can 
> be created at application submission time, whether because they are specified 
> as the application’s queue by the submitter or because they are placed there 
> by the user-as-default-queue property. If this is false, any time an app 
> would be placed in a queue that is not specified in the allocations file, it 
> is placed in the “default” queue instead. Defaults to true. *If a queue 
> placement policy is given in the allocations file, this property is ignored.*|
> |yarn.scheduler.fair.user-as-default-queue|Whether to use the username 
> associated with the allocation as the default queue name, in the event that a 
> queue name is not specified. If this is set to “false” or unset, all jobs 
> have a shared default queue, named “default”. Defaults to true. *If a queue 
> placement policy is given in the allocations file, this property is ignored.*|
> Right now these settings affect the conversion regardless of the placement 
> rules. 
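
For illustration, the two yarn-site settings in question (the values shown are 
the FS defaults); per the table above, whenever fair-scheduler.xml contains a 
{{<queuePlacementPolicy>}} block, FS ignores both, so the converter should 
ignore them too:
{code}
yarn.scheduler.fair.allow-undeclared-pools=true
yarn.scheduler.fair.user-as-default-queue=true
{code}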



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-9790) Failed to set default-application-lifetime if maximum-application-lifetime is less than or equal to zero

2020-01-23 Thread Eric Payne (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reopened YARN-9790:
--

Reopening to backport to branch-3.2, 3.1, and 2.10.

> Failed to set default-application-lifetime if maximum-application-lifetime is 
> less than or equal to zero
> 
>
> Key: YARN-9790
> URL: https://issues.apache.org/jira/browse/YARN-9790
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9790.001.patch, YARN-9790.002.patch, 
> YARN-9790.003.patch, YARN-9790.004.patch
>
>
> capacity-scheduler
> {code}
> ...
> yarn.scheduler.capacity.root.dev.maximum-application-lifetime=-1
> yarn.scheduler.capacity.root.dev.default-application-lifetime=604800
> {code}
> refreshQueue was failed as follows
> {code}
> 2019-08-28 15:21:57,423 WARN  resourcemanager.AdminService 
> (AdminService.java:logAndWrapException(910)) - Exception refresh queues.
> java.io.IOException: Failed to re-init queues : Default lifetime604800 can't 
> exceed maximum lifetime -1
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:477)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:423)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshQueues(AdminService.java:394)
> at 
> org.apache.hadoop.yarn.server.api.impl.pb.service.ResourceManagerAdministrationProtocolPBServiceImpl.refreshQueues(ResourceManagerAdministrationProtocolPBServiceImpl.java:114)
> at 
> org.apache.hadoop.yarn.proto.ResourceManagerAdministrationProtocol$ResourceManagerAdministrationProtocolService$2.callBlockingMethod(ResourceManagerAdministrationProtocol.java:271)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Default 
> lifetime604800 can't exceed maximum lifetime -1
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.setupQueueConfigs(LeafQueue.java:268)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:162)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:141)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:259)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.parseQueue(CapacitySchedulerQueueManager.java:283)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager.reinitializeQueues(CapacitySchedulerQueueManager.java:171)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitializeQueues(CapacityScheduler.java:726)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.reinitialize(CapacityScheduler.java:472)
> ... 12 more
> {code}
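
For context, a sketch of the check the fix presumably needs, treating a 
non-positive maximum lifetime as "unlimited"; this is an assumption about the 
intended semantics, not the actual patch ({{maxLifetime}} and 
{{defaultLifetime}} stand in for the configured values):
{code:java}
// maxLifetime <= 0 means unlimited, so any default lifetime is acceptable.
if (maxLifetime > 0 && defaultLifetime > maxLifetime) {
  throw new YarnRuntimeException("Default lifetime " + defaultLifetime
      + " can't exceed maximum lifetime " + maxLifetime);
}
{code}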



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9768) RM Renew Delegation token thread should timeout and retry

2020-01-23 Thread Manikandan R (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022152#comment-17022152
 ] 

Manikandan R commented on YARN-9768:


[~inigoiri]

I ran this test 5 times but haven't come across this timeout issue. Only once 
did a VM crash occur. In addition, I see a lot of

{{java.util.concurrent.ExecutionException: java.lang.ArithmeticException: / by 
zero}}

in the logs. It seems to be related to YARN-9817. Should we trigger Jenkins 
again and see?

> RM Renew Delegation token thread should timeout and retry
> -
>
> Key: YARN-9768
> URL: https://issues.apache.org/jira/browse/YARN-9768
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: Manikandan R
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9768.001.patch, YARN-9768.002.patch, 
> YARN-9768.003.patch, YARN-9768.004.patch, YARN-9768.005.patch, 
> YARN-9768.006.patch, YARN-9768.007.patch, YARN-9768.008.patch, 
> YARN-9768.009.patch
>
>
> The delegation token renewer thread in the RM (DelegationTokenRenewer.java) 
> renews received HDFS tokens to check their validity and expiration time.
> This call is made to an underlying HDFS NN or Router node (which has the 
> exact same APIs as the HDFS NN). If one of the nodes is bad and the renew 
> call gets stuck, the thread remains stuck indefinitely. The thread should 
> ideally time out the renewToken call and retry from the client's perspective.
>  
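
For illustration, a minimal sketch of that timeout-and-retry pattern, running 
the renew call on a separate executor so a stuck RPC can be abandoned; the 
timeout, retry count and class name here are illustrative, not the 
DelegationTokenRenewer internals:
{code:java}
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.token.Token;

public class TimedTokenRenewer {
  private final ExecutorService pool = Executors.newCachedThreadPool();

  /** Bound each renew call and retry so a stuck RPC can't hang the thread. */
  public long renewWithTimeout(Token<?> token, Configuration conf)
      throws Exception {
    for (int attempt = 1; attempt <= 3; attempt++) {
      Future<Long> renewal = pool.submit(() -> token.renew(conf));
      try {
        return renewal.get(60, TimeUnit.SECONDS); // new expiration time
      } catch (TimeoutException e) {
        renewal.cancel(true); // abandon the stuck RPC and retry
      }
    }
    throw new IOException("token renewal timed out after 3 attempts");
  }
}
{code}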



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10027) Add ability for ATS (log servlet) to read logs of running apps

2020-01-23 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022143#comment-17022143
 ] 

Adam Antal commented on YARN-10027:
---

It turns out that this topic is more complicated.

- For finished apps, the log aggregation controllers provide full support for 
reading logs.
- For running applications: every currently running container can be queried 
from the RM web UI's 
/ws/v1/cluster/apps/{app}/appattempts/{app_attempt}/containers endpoint. With 
the list of running containers, one can query the running containers' logs from 
the log servlet exposed by the RM, ATS, AHS or JHS web UI.
- The interesting thing is that there is no fully automated support for 
getting already finished container logs for a running application. The 
mentioned RM API endpoint does not expose the list of finished containers, 
only the running ones, while RMAppAttemptImpl only stores the recently 
finished ones that are sent along the RM-AM protocol, and deletes them 
immediately after the AM receives the information.

If a user has a container id, they can still ask for the logs of that finished 
container directly from the NodeManager where the container was running, just 
by looking up the NodeManager's local folder. But as far as I could see there 
is no internal data structure that keeps this information, therefore a user 
cannot get the list of finished containers for an attempt. Also, tracking 
container ids is not as simple as just incrementing the last digits (like 
_0004 at the end of the container id), because there can be containers 
released before allocation that were never actually running.

Things to do: we should check whether the new log servlet that is integrated 
into the RM through the AppInfoProvider interface is able to find the 
NodeManager where a finished container ended. If it can, then the RM actually 
has some data structure keeping track of containers, but I think the scheduler 
will just forget this information after the container is finished and its 
resources are available again.
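
For reference, a small sketch of the second bullet: fetching the currently 
running containers of an attempt through the RM endpoint mentioned above (host 
and ids are placeholders):
{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListRunningContainers {
  public static void main(String[] args) throws Exception {
    // Only running containers are returned; already finished containers of a
    // running app are the gap discussed above.
    URI uri = URI.create("http://rm-host:8088/ws/v1/cluster/apps/"
        + "application_1579000000000_0001/appattempts/"
        + "appattempt_1579000000000_0001_000001/containers");
    HttpRequest request = HttpRequest.newBuilder(uri)
        .header("Accept", "application/json")
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body()); // JSON list of container infos
  }
}
{code}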

> Add ability for ATS (log servlet) to read logs of running apps
> --
>
> Key: YARN-10027
> URL: https://issues.apache.org/jira/browse/YARN-10027
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Minor
>
> Currently neither version of the AHS is able to read logs of running apps 
> (local logs of NodeManager). YARN log CLI is integrated with NodeManager to 
> extract local logs as well (see YARN-5224 for reference), the same should be 
> done for ATS.
> Some context:
>  The local log files are read by the server in 
> {{NMWebServices#getContainerLogFile}}. This is accessed by the YARN logs CLI 
> through REST using the /containers/\{containerid}/logs/\{filename} endpoint 
> in {{LogsCLI#getResponeFromNMWebService}}.
> If YARN-10026 we can pull the common code pieces out of those services, we 
> can implement this in the common log servlet.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration

2020-01-23 Thread Kinga Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kinga Marton updated YARN-10022:

Attachment: YARN-10022.002.patch

> Create RM Rest API to validate a CapacityScheduler Configuration
> 
>
> Key: YARN-10022
> URL: https://issues.apache.org/jira/browse/YARN-10022
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Kinga Marton
>Assignee: Kinga Marton
>Priority: Major
> Attachments: YARN-10022.001.patch, YARN-10022.002.patch, 
> YARN-10022.WIP.patch, YARN-10022.WIP2.patch
>
>
> RMWebService should expose a new api which gets a CapacityScheduler 
> Configuration as an input, validates it and returns success / failure.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-10098) Add interface to get node iterators by scheduler key for AppPlacementAllocator

2020-01-23 Thread Bibin Chundatt (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin Chundatt resolved YARN-10098.
---
Resolution: Invalid

> Add interface to get node iterators by scheduler key for AppPlacementAllocator
> --
>
> Key: YARN-10098
> URL: https://issues.apache.org/jira/browse/YARN-10098
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Bibin Chundatt
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration

2020-01-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17022079#comment-17022079
 ] 

Hadoop QA commented on YARN-10022:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 17 new + 104 unchanged - 9 fixed = 121 total (was 113) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 83m  
1s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | YARN-10022 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12991645/YARN-10022.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d45b910bf60b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9520b2ad |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/25427/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25427/testReport/ |
| Max. process+thread count | 863 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn

[jira] [Commented] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration

2020-01-23 Thread Kinga Marton (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17021963#comment-17021963
 ] 

Kinga Marton commented on YARN-10022:
-

Thank you [~prabhujoseph] for checking the patch. I have uploaded a new one 
with the fixes + added some unit tests.

I have also opened a follow-up issue (YARN-10100) to separate the validation 
part from the initialisation part.

> Create RM Rest API to validate a CapacityScheduler Configuration
> 
>
> Key: YARN-10022
> URL: https://issues.apache.org/jira/browse/YARN-10022
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Kinga Marton
>Assignee: Kinga Marton
>Priority: Major
> Attachments: YARN-10022.001.patch, YARN-10022.WIP.patch, 
> YARN-10022.WIP2.patch
>
>
> RMWebService should expose a new api which gets a CapacityScheduler 
> Configuration as an input, validates it and returns success / failure.
>   
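
For illustration, a minimal sketch of how a client could exercise such a 
validation endpoint once it exists; the path 
{{/ws/v1/cluster/scheduler-conf/validate}} and the verb are assumptions 
modelled on the existing scheduler-conf mutation API, not the committed 
interface:
{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class ValidateCsConfig {
  public static void main(String[] args) throws Exception {
    // Hypothetical endpoint; the real path and verb come from the patch.
    URI uri = URI.create(
        "http://rm-host:8088/ws/v1/cluster/scheduler-conf/validate");
    HttpRequest request = HttpRequest.newBuilder(uri)
        .header("Content-Type", "application/xml")
        // Send the candidate capacity-scheduler.xml as the request body.
        .POST(HttpRequest.BodyPublishers.ofFile(
            Path.of("capacity-scheduler.xml")))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    // Expect success for a valid configuration, an error status otherwise.
    System.out.println(response.statusCode() + ": " + response.body());
  }
}
{code}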



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10100) [CS] Separate config validation steps from the update part

2020-01-23 Thread Kinga Marton (Jira)
Kinga Marton created YARN-10100:
---

 Summary: [CS] Separate config validation steps from the update part
 Key: YARN-10100
 URL: https://issues.apache.org/jira/browse/YARN-10100
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacity scheduler
Reporter: Kinga Marton
Assignee: Kinga Marton


In Capacity Scheduler initialisation/reinitialisation a lot of validation 
steps are performed. Some of these steps are deeply hidden. With the current 
implementation it is really hard to figure out what a valid configuration 
means.

Let's figure out what exactly the validation steps are and separate them from 
the update part.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10022) Create RM Rest API to validate a CapacityScheduler Configuration

2020-01-23 Thread Kinga Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kinga Marton updated YARN-10022:

Attachment: YARN-10022.001.patch

> Create RM Rest API to validate a CapacityScheduler Configuration
> 
>
> Key: YARN-10022
> URL: https://issues.apache.org/jira/browse/YARN-10022
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Kinga Marton
>Assignee: Kinga Marton
>Priority: Major
> Attachments: YARN-10022.001.patch, YARN-10022.WIP.patch, 
> YARN-10022.WIP2.patch
>
>
> RMWebService should expose a new api which gets a CapacityScheduler 
> Configuration as an input, validates it and returns success / failure.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org