[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-03 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278616#comment-17278616
 ] 

Andras Gyori commented on YARN-10532:
-

Thank you [~zhuqi] for taking the time to address the feedback! The overall 
logic looks good to me, but I still have a few additional points regarding the 
testing:
 * The new policy class should have its own tests, where you check the 
markedForDeletion and sentForDeletion sets with some mocked RM (see the sketch 
after this list). Here you do not need to care about what the RM does when you 
send the event; just make sure the policy logic itself is correct. (Check 
TestCapacitySchedulerLazyPreemption for an example.)
 * In TestCapacitySchedulerNewQueueAutoCreation, the markedForDeletion set is a 
policy-internal variable, which should only be checked in the policy test. In 
this test case, only editSchedule should be invoked from the policy class, 
nothing else.
 * A test is missing for the case when a queue has an application but the 
policy is invoked. It is really important not to remove queues that have 
running applications, so this should be tested thoroughly.

> Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is 
> not being used
> 
>
> Key: YARN-10532
> URL: https://issues.apache.org/jira/browse/YARN-10532
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10532.001.patch, YARN-10532.002.patch, 
> YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, 
> YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, 
> YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, 
> YARN-10532.012.patch, YARN-10532.013.patch, YARN-10532.014.patch
>
>
> It's better if we can delete auto-created queues when they are not in use for 
> a period of time (like 5 mins). This will be helpful when we have a large 
> number of auto-created queues (e.g. from 500 users) but only a small subset 
> of them is actively used.






[jira] [Commented] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278606#comment-17278606
 ] 

Szilard Nemeth commented on YARN-10612:
---

Hi [~shuzirra],
Thanks for the explanation.
Fix looks good to me, committed to trunk.
Resolving this jira.

> Fix findbugs issue introduced in YARN-10585
> ---
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>







[jira] [Updated] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10612:
--
Fix Version/s: 3.4.0

> Fix findbugs issue introduced in YARN-10585
> ---
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10612.001.patch
>
>







[jira] [Updated] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Szilard Nemeth (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-10612:
--
Summary: Fix findbugs issue introduced in YARN-10585  (was: Fix find bugs 
issue introduced in YARN-10585)

> Fix findbugs issue introduced in YARN-10585
> ---
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>







[jira] [Updated] (YARN-10036) Install yarnpkg and upgrade nodejs in Dockerfile

2021-02-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-10036:
-
Fix Version/s: 3.2.3
   2.10.2

Backported to branch-3.2 and branch-2.10 cleanly.
I couldn't backport this to branch-3.1 cleanly; I probably need to backport 
other commits first, and I'll investigate.

> Install yarnpkg and upgrade nodejs in Dockerfile
> 
>
> Key: YARN-10036
> URL: https://issues.apache.org/jira/browse/YARN-10036
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build, yarn-ui-v2
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 2.10.2, 3.2.3
>
>
> Now node.js is installed in the Dockerfile but yarnpkg is not installed.
> I'd like to run the "yarn upgrade" command in the build env to manage and 
> upgrade the dependencies.






[jira] [Commented] (YARN-10036) Install yarnpkg and upgrade nodejs in Dockerfile

2021-02-03 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278541#comment-17278541
 ] 

Akira Ajisaka commented on YARN-10036:
--

{noformat}
WARN engine npm@7.5.2: wanted: {"node":">=10"} (current: 
{"node":"4.2.6","npm":"3.5.2"})
WARN engine npm@7.5.2: wanted: {"node":">=10"} (current: 
{"node":"4.2.6","npm":"3.5.2"})
/usr/local/lib
`-- (empty)
npm ERR! Linux 3.10.0-1160.6.1.el7.x86_64
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "npm@latest" "-g"
npm ERR! node v4.2.6
npm ERR! npm  v3.5.2
npm ERR! path /usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552
npm ERR! code ENOENT
npm ERR! errno -2
npm ERR! syscall rename
npm ERR! enoent ENOENT: no such file or directory, rename 
'/usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552' -> 
'/usr/local/lib/node_modules/npm/node_modules/@npmcli/ci-detect'
npm ERR! enoent ENOENT: no such file or directory, rename 
'/usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552' -> 
'/usr/local/lib/node_modules/npm/node_modules/@npmcli/ci-detect'
npm ERR! enoent This is most likely not a problem with npm itself
npm ERR! enoent and is related to npm not being able to find a file.
npm ERR! enoent 
npm ERR! Please include the following file with any support request:
npm ERR! /root/npm-debug.log
npm ERR! code 1
The command '/bin/bash -o pipefail -c apt-get -q update && apt-get install 
-y --no-install-recommends nodejs npm && apt-get clean && rm -rf 
/var/lib/apt/lists/* && ln -s /usr/bin/nodejs /usr/bin/node && npm 
install npm@latest -g && npm install -g jshint' returned a non-zero code: 1
{noformat}
Now "npm install npm@latest -g" is failing, so I'll cherry-pick this to all the 
active branches.

> Install yarnpkg and upgrade nodejs in Dockerfile
> 
>
> Key: YARN-10036
> URL: https://issues.apache.org/jira/browse/YARN-10036
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build, yarn-ui-v2
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Now node.js is installed in the Dockerfile but yarnpkg is not installed.
> I'd like to run the "yarn upgrade" command in the build env to manage and 
> upgrade the dependencies.






[jira] [Commented] (YARN-10036) Install yarnpkg and upgrade nodejs in Dockerfile

2021-02-03 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278542#comment-17278542
 ] 

Akira Ajisaka commented on YARN-10036:
--

Error log: https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/456/console

> Install yarnpkg and upgrade nodejs in Dockerfile
> 
>
> Key: YARN-10036
> URL: https://issues.apache.org/jira/browse/YARN-10036
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: build, yarn-ui-v2
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Now node.js is installed in the Dockerfile but yarnpkg is not installed.
> I'd like to run the "yarn upgrade" command in the build env to manage and 
> upgrade the dependencies.






[jira] [Commented] (YARN-10607) User environment is unable to prepend PATH when mapreduce.admin.user.env also sets PATH

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278536#comment-17278536
 ] 

Hadoop QA commented on YARN-10607:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
14s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 2 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
45s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
14s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
43s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
13s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
28s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 37s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
28s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
53s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/585/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html{color}
 | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
6s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
6s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
7s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 

[jira] [Comment Edited] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278079#comment-17278079
 ] 

Qi Zhu edited comment on YARN-10610 at 2/4/21, 3:40 AM:


 [~snemeth]  [~shuzirra] 

The findbugs issue is not related to this change. Should the checkstyle 
warning be fixed, or should the field just stay consistent with the original 
queueName field?

Do you have any other thoughts?

Thanks.
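
For context, the change boils down to exposing the full hierarchical path in 
the CS REST DAO next to the existing queueName, mirroring FairScheduler. A 
minimal sketch, assuming the field lands in CapacitySchedulerQueueInfo (the 
exact placement may differ in the patch):
{code:java}
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Illustrative only: expose the full path in the CapacityScheduler
// webapp DAO, e.g. "root.a.a1" instead of just "a1".
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class CapacitySchedulerQueueInfo {
  protected String queueName;
  protected String queuePath;

  public String getQueueName() {
    return queueName;
  }

  public String getQueuePath() {
    return queuePath;
  }
}
{code}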

 


was (Author: zhuqi):
 [~snemeth]  [~shuzirra]

The findbugs issue is not related to this change, and I think the checkstyle 
warning should not be fixed, to stay consistent with the original queueName 
field.

Could you help review this for merge?

Thanks.

 

> Add queuePath to restful api for CapacityScheduler consistent with 
> FairScheduler queuePath.
> ---
>
> Key: YARN-10610
> URL: https://issues.apache.org/jira/browse/YARN-10610
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10610.001.patch, YARN-10610.002.patch, 
> image-2021-02-03-13-47-13-516.png
>
>
> The CS REST API only exposes queueName, but not the full queuePath.
> !image-2021-02-03-13-47-13-516.png|width=631,height=356!
>  
>  






[jira] [Comment Edited] (YARN-10178) Global Scheduler async thread crash caused by 'Comparison method violates its general contract'

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274163#comment-17274163
 ] 

Qi Zhu edited comment on YARN-10178 at 2/4/21, 3:27 AM:


[~wangda] [~bteke]

I have updated the patch to sort PriorityQueueResourcesForSorting and added a 
reference to the queue.

I also added tests to prevent side effects/regressions.

After the performance test, I found that there seems to be no performance cost:

In the mock performance test, there are two cases: mocking 1000 queues and 
mocking 1 queues.

1. I am surprised that when the queue size is 1000, the new structure sorts 
faster than the old queue sort; the gap is less than 1s.

2. When the queue size is 1, the old queue sort is faster than the new 
structure sort, but the gap is always less than 10s.

Do you have any thoughts about this?

Thanks a lot.
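
To illustrate the approach for reviewers: each queue's mutable usage values 
are copied once into an immutable holder, and the sort runs over those 
snapshots, so the comparator can never observe values changing mid-sort (which 
is what breaks TimSort's contract). The field and method shapes below are 
assumptions based on this comment, not the exact patch:
{code:java}
// Illustrative sketch: snapshot each child queue's usage before sorting.
class PriorityQueueResourcesForSorting {
  final float absoluteUsedCapacity;
  final float usedCapacity;
  final CSQueue queue;  // reference back to the live queue

  PriorityQueueResourcesForSorting(CSQueue q, String partition) {
    // Values are read exactly once, here; they cannot change mid-sort.
    this.absoluteUsedCapacity =
        q.getQueueCapacities().getAbsoluteUsedCapacity(partition);
    this.usedCapacity =
        q.getQueueCapacities().getUsedCapacity(partition);
    this.queue = q;
  }
}

// Sorting the immutable snapshots keeps the comparator's view stable,
// satisfying TimSort's contract.
List<PriorityQueueResourcesForSorting> snapshots = new ArrayList<>();
for (CSQueue q : childQueues) {
  snapshots.add(new PriorityQueueResourcesForSorting(q, partition));
}
snapshots.sort(Comparator.comparingDouble(s -> s.usedCapacity));
{code}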

 


was (Author: zhuqi):
[~wangda] [~bteke]

I have updated the patch to sort PriorityQueueResourcesForSorting and added a 
reference to the queue.

I also added tests to prevent side effects/regressions.

After the performance test, I found that there seems to be no performance cost:

1. I am surprised that when the queue size is about 1000, the new sort is 
faster than the old queue sort.

2. When the queue size is huge (1), the old sort is faster than the new one, 
but the gap is always less than 10s.

Do you have any thoughts about this?

Thanks a lot.

 

> Global Scheduler async thread crash caused by 'Comparison method violates its 
> general contract'
> ---
>
> Key: YARN-10178
> URL: https://issues.apache.org/jira/browse/YARN-10178
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.2.1
>Reporter: tuyu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10178.001.patch, YARN-10178.002.patch, 
> YARN-10178.003.patch, YARN-10178.004.patch
>
>
> Global Scheduler Async Thread crash stack
> {code:java}
> ERROR org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received 
> RMFatalEvent of type CRITICAL_THREAD_CRASH, caused by a critical thread, 
> Thread-6066574, that exited unexpectedly: java.lang.IllegalArgumentException: 
> Comparison method violates its general contract!  
>at 
> java.util.TimSort.mergeHi(TimSort.java:899)
> at java.util.TimSort.mergeAt(TimSort.java:516)
> at java.util.TimSort.mergeForceCollapse(TimSort.java:457)
> at java.util.TimSort.sort(TimSort.java:254)
> at java.util.Arrays.sort(Arrays.java:1512)
> at java.util.ArrayList.sort(ArrayList.java:1462)
> at java.util.Collections.sort(Collections.java:177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.policy.PriorityUtilizationQueueOrderingPolicy.getAssignmentIterator(PriorityUtilizationQueueOrderingPolicy.java:221)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.sortAndGetChildrenAllocationIterator(ParentQueue.java:777)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainersToChildQueues(ParentQueue.java:791)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue.assignContainers(ParentQueue.java:623)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1635)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1629)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1732)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1481)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.schedule(CapacityScheduler.java:569)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$AsyncScheduleThread.run(CapacityScheduler.java:616)
> {code}
> Java 8 Arrays.sort uses the TimSort algorithm by default, and TimSort places 
> a few requirements on the comparator:
> {code:java}
> 1. sgn(x.compareTo(y)) == -sgn(y.compareTo(x))
> 2. x > y && y > z --> x > z
> 3. x == y --> sgn(x.compareTo(z)) == sgn(y.compareTo(z))
> {code}
> If the array elements do not satisfy these requirements, TimSort will throw 
> 'java.lang.IllegalArgumentException'.
> Looking at the PriorityUtilizationQueueOrderingPolicy.compare function, we 
> can see that the Capacity Scheduler uses these queue resource usages to 
> compare:
> {code:java}
> AbsoluteUsedCapacity
> UsedCapacity
> ConfiguredMinResource
> AbsoluteCapacity
> {code}

[jira] [Commented] (YARN-10611) Fix that shaded should be used for google guava imports in YARN-10352.

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278478#comment-17278478
 ] 

Qi Zhu commented on YARN-10611:
---

Thanks for the reply, [~ahussein].

The findbugs issue will be fixed in YARN-10612.

The TestDelegationTokenRenewer failure is not related; it will be fixed in 
YARN-10500.

Thanks.
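
For reference, the gist of such a fix is switching direct Guava imports to the 
shaded ones relocated under hadoop-thirdparty; the concrete class below is 
just an example:
{code:java}
// Before: direct Guava import.
// import com.google.common.collect.ImmutableSet;

// After: the Guava copy shaded into hadoop-thirdparty.
import org.apache.hadoop.thirdparty.com.google.common.collect.ImmutableSet;
{code}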

> Fix that shaded should be used for google guava imports in YARN-10352.
> --
>
> Key: YARN-10611
> URL: https://issues.apache.org/jira/browse/YARN-10611
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10611.001.patch
>
>
> Fix that shaded should be used for google guava imports in YARN-10352.






[jira] [Comment Edited] (YARN-10612) Fix find bugs issue introduced in YARN-10585

2021-02-03 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278453#comment-17278453
 ] 

Gergely Pollak edited comment on YARN-10612 at 2/4/21, 1:25 AM:


The trunk compile shows the findbugs error and the patch compile doesn't, so I 
think we can consider it fixed. The reason we don't have any tests for this 
change is that the findbugs warning was caused by an unnecessary null check, 
and null inputs already have test cases.

The relevant part of the console log:
{code:java}


 findbugs detection: patch


...

Writing 
/home/jenkins/jenkins-agent/workspace/PreCommit-YARN-Build/out/combined-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.xml
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
{code}
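
For reference, the flagged pattern is roughly the following (names are 
hypothetical; the findbugs pattern in question is 
RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE):
{code:java}
String converted = convertRule(legacyRule);  // cannot return null here
if (converted != null) {  // findbugs: redundant null check of a value
  rules.add(converted);   // known to be non-null
}

// The fix is simply dropping the check:
rules.add(converted);
{code}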


was (Author: shuzirra):
The trunk compile shows the findbugs error and the patch compile doesn't, so I 
think we can consider it fixed. The reason we don't have any tests for this 
change is that the findbugs warning was caused by an unnecessary null check, 
and null inputs already have test cases.

> Fix find bugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>







[jira] [Comment Edited] (YARN-10612) Fix find bugs issue introduced in YARN-10585

2021-02-03 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278453#comment-17278453
 ] 

Gergely Pollak edited comment on YARN-10612 at 2/4/21, 1:18 AM:


The trunk compile shows the findbugs error and the patch compile doesn't, so I 
think we can consider it fixed. The reason we don't have any tests for this 
change is that the findbugs warning was caused by an unnecessary null check, 
and null inputs already have test cases.


was (Author: shuzirra):
The trunk compile shows the findbugs error and the patch compile doesn't, so I 
think we can consider it fixed. The reason we don't have any tests for this 
change is that the findbugs warning was caused by an unnecessary null check, 
and null inputs already have test cases.

 

 

 

> Fix find bugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>







[jira] [Commented] (YARN-10612) Fix find bugs issue introduced in YARN-10585

2021-02-03 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278453#comment-17278453
 ] 

Gergely Pollak commented on YARN-10612:
---

The trunk compile shows the findbugs error and the patch compile doesn't, so I 
think we can consider it fixed. The reason we don't have any tests for this 
change is that the findbugs warning was caused by an unnecessary null check, 
and null inputs already have test cases.

 

 

 

> Fix find bugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>







[jira] [Updated] (YARN-10607) User environment is unable to prepend PATH when mapreduce.admin.user.env also sets PATH

2021-02-03 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-10607:
---
Attachment: YARN-10607.002.patch

> User environment is unable to prepend PATH when mapreduce.admin.user.env also 
> sets PATH
> ---
>
> Key: YARN-10607
> URL: https://issues.apache.org/jira/browse/YARN-10607
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-10607.001.patch, YARN-10607.002.patch
>
>
> When using the tarball approach to ship relevant Hadoop jars to containers, 
> it is helpful to set {{mapreduce.admin.user.env}} to something like 
> {{PATH=./hadoop-tarball:\{\{PATH\}\}}} to make sure that all of the Hadoop 
> binaries are on the PATH. This way you can call {{hadoop}} instead of 
> {{./hadoop-tarball/hadoop}}. The intention here is to force-prepend 
> {{./hadoop-tarball}} and then append the set {{PATH}} afterwards. But if a 
> user would like to override the appended portion of {{PATH}} in their 
> environment, they are unable to do so. This is because {{PATH}} ends up 
> getting parsed twice. Initially it is set via {{mapreduce.admin.user.env}} to 
> {{PATH=./hadoop-tarball:$SYS_PATH}}. In this case {{SYS_PATH}} is what I'll 
> refer to as the normal system path, e.g. {{/usr/local/bin:/usr/bin}}, etc.
> After this, the user env parsing happens. For example, let's say the user 
> sets their {{PATH}} to {{PATH=.:$PATH}}. We have already parsed {{PATH}} from 
> the admin.user.env. Then we go to parse the user environment and find that 
> the user also specified {{PATH}}. So {{$PATH}} ends up getting expanded to 
> {{./hadoop-tarball:$SYS_PATH}}, which leads to the user's {{PATH}} being 
> {{PATH=.:./hadoop-tarball:$SYS_PATH}}. We then append this to {{PATH}}, which 
> has already been set in the environment map via the admin.user.env. So we 
> finally end up with 
> {{PATH=./hadoop-tarball:$SYS_PATH:.:./hadoop-tarball:$SYS_PATH}}.
> This normally isn't a huge deal, but if you want to ship a version of 
> python/perl/etc. that clashes with the one that is already there in 
> {{SYS_PATH}}, you will need to refer to it by its full path, since in the 
> above example {{.}} doesn't appear until after {{$SYS_PATH}}. This is a pain, 
> and it should be possible for the user to prepend to {{PATH}} to override the 
> system/container {{SYS_PATH}}, even when also forcefully prepending to 
> {{PATH}} with your hadoop tarball.
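
The double expansion described above, condensed into a small runnable sketch 
(the variable names and the env-map merging are simplifications, not the 
actual container-launch code):
{code:java}
import java.util.HashMap;
import java.util.Map;

public class PathExpansionDemo {
  public static void main(String[] args) {
    Map<String, String> env = new HashMap<>();
    String sysPath = "/usr/local/bin:/usr/bin";

    // 1) The admin env (mapreduce.admin.user.env) is parsed first.
    env.put("PATH", "./hadoop-tarball:" + sysPath);

    // 2) The user env "PATH=.:$PATH" is then expanded against env,
    //    not against the original system PATH.
    String userPath = ".:" + env.get("PATH");

    // 3) ...and appended to the already-set value.
    env.put("PATH", env.get("PATH") + ":" + userPath);

    // Prints ./hadoop-tarball:<sysPath>:.:./hadoop-tarball:<sysPath>,
    // so the user's "." can never get ahead of the first <sysPath>.
    System.out.println(env.get("PATH"));
  }
}
{code}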






[jira] [Commented] (YARN-10607) User environment is unable to prepend PATH when mapreduce.admin.user.env also sets PATH

2021-02-03 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278443#comment-17278443
 ] 

Eric Badger commented on YARN-10607:


Attaching patch 002 that adds a unit test

> User environment is unable to prepend PATH when mapreduce.admin.user.env also 
> sets PATH
> ---
>
> Key: YARN-10607
> URL: https://issues.apache.org/jira/browse/YARN-10607
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-10607.001.patch, YARN-10607.002.patch
>
>
> When using the tarball approach to ship relevant Hadoop jars to containers, 
> it is helpful to set {{mapreduce.admin.user.env}} to something like 
> {{PATH=./hadoop-tarball:\{\{PATH\}\}}} to make sure that all of the Hadoop 
> binaries are on the PATH. This way you can call {{hadoop}} instead of 
> {{./hadoop-tarball/hadoop}}. The intention here is to force-prepend 
> {{./hadoop-tarball}} and then append the set {{PATH}} afterwards. But if a 
> user would like to override the appended portion of {{PATH}} in their 
> environment, they are unable to do so. This is because {{PATH}} ends up 
> getting parsed twice. Initially it is set via {{mapreduce.admin.user.env}} to 
> {{PATH=./hadoop-tarball:$SYS_PATH}}. In this case {{SYS_PATH}} is what I'll 
> refer to as the normal system path, e.g. {{/usr/local/bin:/usr/bin}}, etc.
> After this, the user env parsing happens. For example, let's say the user 
> sets their {{PATH}} to {{PATH=.:$PATH}}. We have already parsed {{PATH}} from 
> the admin.user.env. Then we go to parse the user environment and find that 
> the user also specified {{PATH}}. So {{$PATH}} ends up getting expanded to 
> {{./hadoop-tarball:$SYS_PATH}}, which leads to the user's {{PATH}} being 
> {{PATH=.:./hadoop-tarball:$SYS_PATH}}. We then append this to {{PATH}}, which 
> has already been set in the environment map via the admin.user.env. So we 
> finally end up with 
> {{PATH=./hadoop-tarball:$SYS_PATH:.:./hadoop-tarball:$SYS_PATH}}.
> This normally isn't a huge deal, but if you want to ship a version of 
> python/perl/etc. that clashes with the one that is already there in 
> {{SYS_PATH}}, you will need to refer to it by its full path, since in the 
> above example {{.}} doesn't appear until after {{$SYS_PATH}}. This is a pain, 
> and it should be possible for the user to prepend to {{PATH}} to override the 
> system/container {{SYS_PATH}}, even when also forcefully prepending to 
> {{PATH}} with your hadoop tarball.






[jira] [Commented] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278425#comment-17278425
 ] 

Ahmed Hussein commented on YARN-10585:
--

The process for dealing with those fixes is not defined in the community. 
Therefore, it is handled according to personal style and preference.
 My point regarding the difference between reverting vs. filing a new Jira:
 * Yetus analyses the code based on the diff. This means that splitting the PR 
into two phases implies that the UTs and the code analysis have not been run 
on all the changes together. Here are a couple of sample examples of such 
cases:
 ** Take YARN-10352, which was committed with two findbugs errors. Both errors 
were lost because the report expired. The follow-up Jira YARN-10611, which was 
supposed to fix an import, shows only one findbugs report.
 ** Another example: if the follow-up Jira does not touch UT files, then Yetus 
won't trigger the test cases. If the follow-up fixes break the unit tests, 
Yetus won't detect that, leading to the merge of broken code.
 * While I agree that findbugs/checkstyle reports have a lot of 
false positives, they occasionally point out real bugs. This was the case with 
YARN-10352, which broke the Hadoop dependencies.
 * In the last couple of weeks, there were at least 3 code merges with Yetus 
errors, the first one breaking the Guava dependencies: 1) YARN-10352 - 
YARN-10611, 2) YARN-10574 - YARN-10506, 3) YARN-10585 - YARN-10612.

> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make the transition easier, we need to create tooling to support the 
> migration effort. The first step is to create a class which can migrate from 
> the legacy format to the new JSON format.






[jira] [Commented] (YARN-10612) Fix find bugs issue introduced in YARN-10585

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278420#comment-17278420
 ] 

Hadoop QA commented on YARN-10612:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 58s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
49s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/584/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html{color}
 | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  8s{color} | {color:green}{col

[jira] [Commented] (YARN-10607) User environment is unable to prepend PATH when mapreduce.admin.user.env also sets PATH

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278412#comment-17278412
 ] 

Hadoop QA commented on YARN-10607:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
26s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
56s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
43s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
5s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 26s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
31s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
34s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
29s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
29s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
0s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
0s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
29s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  

[jira] [Commented] (YARN-10612) Fix find bugs issue introduced in YARN-10585

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278384#comment-17278384
 ] 

Ahmed Hussein commented on YARN-10612:
--

I am OK with submitting the fix as a separate Jira, as mentioned in YARN-10585.

> Fix find bugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>







[jira] [Commented] (YARN-10611) Fix that shaded should be used for google guava imports in YARN-10352.

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278383#comment-17278383
 ] 

Ahmed Hussein commented on YARN-10611:
--

Thanks [~zhuqi]!
Can you please fix the findbugs error and confirm whether the 
{{TestDelegationTokenRenewer}} failure is related to the changes?

> Fix that shaded should be used for google guava imports in YARN-10352.
> --
>
> Key: YARN-10611
> URL: https://issues.apache.org/jira/browse/YARN-10611
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10611.001.patch
>
>
> Fix that shaded should be used for google guava imports in YARN-10352.






[jira] [Issue Comment Deleted] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated YARN-10352:
-
Comment: was deleted

(was: The problem is that at any point we have more than one commit for each 
main Jira ticket.
This makes it hard to move between revisions without breaking the build.

I suggest that the fixes be amended to the original commit and that YARN-10611 
be closed.
That is, revert and recommit a patch that does not generate Yetus errors.

Please make sure that the patch passes Yetus before merging.


)

> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is enabled, stopping an NM won't unregister it from the 
> RM, so the RM's active nodes will still include those stopped nodes until the 
> NM Liveness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the asynchronous Capacity Scheduler threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.
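
A minimal sketch of the heartbeat-based skip referenced above; the exact 
method shape and the grace factor of 2 are assumptions for illustration (see 
CapacityScheduler#shouldSkipNodeSchedule for the real logic):
{code:java}
// Illustrative only: skip placement on a node whose last heartbeat is
// older than a small multiple of the configured heartbeat interval.
private boolean shouldSkipNodeSchedule(FiCaSchedulerNode node,
    long heartbeatIntervalMs) {
  long sinceLastHeartbeat =
      Time.monotonicNow() - node.getLastHeartbeatMonotonicTime();
  return sinceLastHeartbeat > heartbeatIntervalMs * 2;
}
{code}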






[jira] [Resolved] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved YARN-10352.
--
Resolution: Fixed

> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is enabled, stopping an NM won't unregister it from the 
> RM, so the RM's active nodes will still include those stopped nodes until the 
> NM Liveness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the asynchronous Capacity Scheduler threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.






[jira] [Commented] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278381#comment-17278381
 ] 

Ahmed Hussein commented on YARN-10585:
--

Thank you [~shuzirra] and [~snemeth] for the clarification.
[~snemeth] Sorry that I sounded negative and I did not word my comment the best 
way. I did not mean to comment on the quality of the work. What I meant was 
that the credibility of the process will diminish when it becomes a habit. I am 
confident you have verified the patch and the UTs.
I believe you have a good point to keep this Jira as resolved while fixing the 
issue in YARN-10612. Apologies for reopening this Jira.
 

> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make the transition easier, we need to create tooling to support the 
> migration effort. The first step is to create a class which can migrate from 
> the legacy format to the new JSON format.






[jira] [Resolved] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved YARN-10585.
--
Resolution: Fixed

> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make the transition easier, we need to create tooling to support the 
> migration effort. The first step is to create a class which can migrate from 
> the legacy format to the new JSON format.






[jira] [Resolved] (YARN-10601) The Yarn client should use the UGI who created the Yarn client for obtaining a delegation token for the remote log dir

2021-02-03 Thread Daniel Fritsi (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Fritsi resolved YARN-10601.
--
Resolution: Invalid

See my previous comment

> The Yarn client should use the UGI who created the Yarn client for obtaining 
> a delegation token for the remote log dir
> --
>
> Key: YARN-10601
> URL: https://issues.apache.org/jira/browse/YARN-10601
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Daniel Fritsi
>Priority: Critical
>
> It seems there was a bug introduced in YARN-10333 in this section of 
> *{color:#0747A6}{{addLogAggregationDelegationToken}}{color}*:
> {code:java}
> Path remoteRootLogDir = fileController.getRemoteRootLogDir();
> FileSystem fs = remoteRootLogDir.getFileSystem(conf);
> final org.apache.hadoop.security.token.Token<?>[] finalTokens =
> fs.addDelegationTokens(masterPrincipal, credentials);
> {code}
> *{color:#0747A6}{{remoteRootLogDir.getFileSystem}}{color}* simply does this:
> {code:java}
> public FileSystem getFileSystem(Configuration conf) throws IOException {
>   return FileSystem.get(this.toUri(), conf);
> }
> {code}
> As far as I know it's customary to create a YarnClient instance via 
> *{color:#0747A6}{{YarnClient.createYarnClient()}}{color}* in a 
> UserGroupInformation.doAs block if you would like to use it with a different 
> user than the current one. E.g.:
> {code:java}
> YarnClient yarnClient = ugi.doAs(new PrivilegedExceptionAction<YarnClient>() {
> @Override
> public YarnClient run() throws Exception {
> YarnClient yarnClient = YarnClient.createYarnClient();
> yarnClient.init(conf);
> yarnClient.start();
> return yarnClient;
> }
> });
> {code}
> If this statement is correct then I think YarnClient should save the 
> *{color:#0747A6}{{UserGroupInformation.getCurrentUser()}}{color}* when the 
> YarnClient is being created and the 
> *{color:#0747A6}{{remoteRootLogDir.getFileSystem(conf)}}{color}* call should 
> be made inside an ugi.doAs block with that saved user.
> A more concrete example:
> {code:java}
> public YarnClient createYarnClient(UserGroupInformation ugi, Configuration 
> conf) throws Exception {
> return ugi.doAs((PrivilegedExceptionAction<YarnClient>) () -> {
> // Here I am the submitterUser (see below)
> YarnClient yarnClient = YarnClient.createYarnClient();
> yarnClient.init(conf);
> yarnClient.start();
> return yarnClient;
> });
> }
> public void run() {
> // Here I am the serviceUser
> // ...
> Configuration conf = ...
> // ...
> UserGroupInformation ugi = getSubmitterUser();
> // ...
> YarnClient yarnClient = createYarnClient(ugi);
> // ...
> ApplicationSubmissionContext context = ...
> // ...
> yarnClient.submitApplication(context);
> }
> {code}
> As you can see *{color:#0747A6}{{submitApplication}}{color}* is not invoked 
> inside an ugi.doAs block and submitApplication is the one who will eventually 
> invoke *{color:#0747A6}{{addLogAggregationDelegationToken}}{color}*. That's 
> why we need to save the UGI during the YarnClient creation and create the 
> FileSystem instance inside an ugi.doAs with that saved user. Otherwise Yarn 
> will try to get a delegation token with an incorrect user (serviceUser) 
> instead of the submitterUser.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10601) The Yarn client should use the UGI who created the Yarn client for obtaining a delegation token for the remote log dir

2021-02-03 Thread Daniel Fritsi (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278376#comment-17278376
 ] 

Daniel Fritsi commented on YARN-10601:
--

Yeah, we tested it, and if we put submitApplication into a doAs block, all Oozie 
unit and system tests still pass, so for now we'll choose that as the way 
forward. Let me close this ticket. If anyone thinks otherwise, they can reopen 
it. ;)
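
A minimal sketch of that workaround (assuming the {{ugi}}, {{yarnClient}} and 
{{context}} variables from the example in the description below):
{code:java}
// Sketch only: run the submission under the submitter's UGI, so that
// addLogAggregationDelegationToken fetches the delegation token as the
// submitter instead of the service user.
ApplicationId appId = ugi.doAs(
    (PrivilegedExceptionAction<ApplicationId>) () ->
        yarnClient.submitApplication(context));
{code}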

> The Yarn client should use the UGI who created the Yarn client for obtaining 
> a delegation token for the remote log dir
> --
>
> Key: YARN-10601
> URL: https://issues.apache.org/jira/browse/YARN-10601
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Daniel Fritsi
>Priority: Critical
>
> It seems there was a bug introduced in YARN-10333 in this section of 
> *{color:#0747A6}{{addLogAggregationDelegationToken}}{color}*:
> {code:java}
> Path remoteRootLogDir = fileController.getRemoteRootLogDir();
> FileSystem fs = remoteRootLogDir.getFileSystem(conf);
> final org.apache.hadoop.security.token.Token<?>[] finalTokens =
> fs.addDelegationTokens(masterPrincipal, credentials);
> {code}
> *{color:#0747A6}{{remoteRootLogDir.getFileSystem}}{color}* simply does this:
> {code:java}
> public FileSystem getFileSystem(Configuration conf) throws IOException {
>   return FileSystem.get(this.toUri(), conf);
> }
> {code}
> As far as I know it's customary to create a YarnClient instance via 
> *{color:#0747A6}{{YarnClient.createYarnClient()}}{color}* in a 
> UserGroupInformation.doAs block if you would like to use it with a different 
> user than the current one. E.g.:
> {code:java}
> YarnClient yarnClient = ugi.doAs(new PrivilegedExceptionAction<YarnClient>() {
> @Override
> public YarnClient run() throws Exception {
> YarnClient yarnClient = YarnClient.createYarnClient();
> yarnClient.init(conf);
> yarnClient.start();
> return yarnClient;
> }
> });
> {code}
> If this statement is correct then I think YarnClient should save the 
> *{color:#0747A6}{{UserGroupInformation.getCurrentUser()}}{color}* when the 
> YarnClient is being created and the 
> *{color:#0747A6}{{remoteRootLogDir.getFileSystem(conf)}}{color}* call should 
> be made inside an ugi.doAs block with that saved user.
> A more concrete example:
> {code:java}
> public YarnClient createYarnClient(UserGroupInformation ugi, Configuration 
> conf) throws Exception {
> return ugi.doAs((PrivilegedExceptionAction<YarnClient>) () -> {
> // Here I am the submitterUser (see below)
> YarnClient yarnClient = YarnClient.createYarnClient();
> yarnClient.init(conf);
> yarnClient.start();
> return yarnClient;
> });
> }
> public void run() {
> // Here I am the serviceUser
> // ...
> Configuration conf = ...
> // ...
> UserGroupInformation ugi = getSubmitterUser();
> // ...
> YarnClient yarnClient = createYarnClient(ugi);
> // ...
> ApplicationSubmissionContext context = ...
> // ...
> yarnClient.submitApplication(context);
> }
> {code}
> As you can see *{color:#0747A6}{{submitApplication}}{color}* is not invoked 
> inside an ugi.doAs block and submitApplication is the one who will eventually 
> invoke *{color:#0747A6}{{addLogAggregationDelegationToken}}{color}*. That's 
> why we need to save the UGI during the YarnClient creation and create the 
> FileSystem instance inside an ugi.doAs with that saved user. Otherwise Yarn 
> will try to get a delegation token with an incorrect user (serviceUser) 
> instead of the submitterUser.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278360#comment-17278360
 ] 

Gergely Pollak commented on YARN-10585:
---

[~ahussein] thank you for your suggestions.
{code:java}
For future code merges and commits, please make sure that the patch/PR does not 
generate Yetus errors before merging.
{code}
While I'm really sorry for the oversight, mistakes unfortunately do happen; as 
soon as we realized it, we opened another Jira to fix it. Please let's not 
argue about the "proper way to fix things" but rather focus on actually fixing 
it.
{code:java}
It is not scalable to have several Jiras filed just to fix checkstyle, and 
findbugs.
{code}
It has nothing to do with scalability, and we are not planning to make it a 
habit, but I think it's more important to get this resolved than to worry about 
one extra JIRA.

> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10607) User environment is unable to prepend PATH when mapreduce.admin.user.env also sets PATH

2021-02-03 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278356#comment-17278356
 ] 

Eric Badger commented on YARN-10607:


Patch 001 adds a new config parameter called {{yarn.nodemanager.force.path}}. 
The content of this config parameter will be force-prepended to the PATH of all 
containers.

Also note that it is imperative to use {{\{\{PATH\}\}}} (the double-brace 
placeholder) instead of {{$PATH}} if you want the expansion to use the PATH of 
the container environment and not the PATH of the client.
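
For illustration, enabling it would presumably look like this (a sketch; the 
property name comes from the patch description above, and the tarball path is 
just an example value):
{code:xml}
<!-- Sketch: force-prepend the tarball directory to the PATH of every
     container launched by this NodeManager. -->
<property>
  <name>yarn.nodemanager.force.path</name>
  <value>./hadoop-tarball</value>
</property>
{code}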

> User environment is unable to prepend PATH when mapreduce.admin.user.env also 
> sets PATH
> ---
>
> Key: YARN-10607
> URL: https://issues.apache.org/jira/browse/YARN-10607
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-10607.001.patch
>
>
> When using the tarball approach to ship relevant Hadoop jars to containers, 
> it is helpful to set {{mapreduce.admin.user.env}} to something like 
> {{PATH=./hadoop-tarball:\{\{PATH\}\}}} to make sure that all of the Hadoop 
> binaries are on the PATH. This way you can call {{hadoop}} instead of 
> {{./hadoop-tarball/hadoop}}. The intention here is to force prepend 
> {{./hadoop-tarball}} and then append the set {{PATH}} afterwards. But if a 
> user would like to override the appended portion of {{PATH}} in their 
> environment, they are unable to do so. This is because {{PATH}} ends up 
> getting parsed twice. Initially it is set via {{mapreduce.admin.user.env}} to 
> {{PATH=./hadoop-tarball:$SYS_PATH}}. In this case {{SYS_PATH}} is what I'll 
> refer to as the normal system path. E.g. {{/usr/local/bin:/usr/bin}}, etc.
> After this, the user env parsing happens. For example, let's say the user 
> sets their {{PATH}} to {{PATH=.:$PATH}}. We have already parsed {{PATH}} from 
> the admin.user.env. Then we go to parse the user environment and find the 
> user also specified {{PATH}}. So {{$PATH}} ends up getting expanded 
> to {{./hadoop-tarball:$SYS_PATH}}, which leads to the user's {{PATH}} being 
> {{PATH=.:./hadoop-tarball:$SYS_PATH}}. We then append this to {{PATH}}, which 
> has already been set in the environment map via the admin.user.env. So we 
> finally end up with 
> {{PATH=./hadoop-tarball:$SYS_PATH:.:./hadoop-tarball:$SYS_PATH}}. 
> This normally isn't a huge deal, but if you want to ship a version of 
> python/perl/etc. that clashes with the one that is already there in 
> {{SYS_PATH}}, you will need to refer to it by its full path, since in the 
> above example {{.}} doesn't appear until after {{$SYS_PATH}}. This is a pain, 
> and a user's {{PATH}} should be able to override the system/container 
> {{SYS_PATH}}, even when the hadoop tarball is also forcefully prepended to 
> {{PATH}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278354#comment-17278354
 ] 

Gergely Pollak commented on YARN-10612:
---

[~ahussein] I don't see how reopening another jira would make anything better. 
Currently the patch that trips the findbugs warning is already in trunk, so to 
remove it we would have to do a revert commit and then a recommit, and the 
commits between the original commit and the revert would still hit the findbugs 
warning (which is a false positive, I might add). So I don't see why 2 commits 
(revert + fix) would be better than just fixing this Jira and solving it all in 
one go.

Anyway, until this settles, I'm setting the Jira to Patch Available to let 
Jenkins run findbugs again.

> Fix findbugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Gergely Pollak (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Pollak reassigned YARN-10612:
-

Assignee: Gergely Pollak

> Fix findbugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10613) Config to allow Intra-queue preemption to enable/disable conservativeDRF

2021-02-03 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278349#comment-17278349
 ] 

Jim Brennan commented on YARN-10613:


[~epayne] any reason we shouldn't add a property for inter-queue-preemption as 
well, so that both are configurable?
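
For concreteness, the parallel knob would presumably look something like this 
(a sketch only; this property name does not exist in the patch and is purely 
hypothetical, mirroring the intra-queue property proposed below):
{code:xml}
<!-- Hypothetical: a matching flag for inter-queue (cross-queue) preemption.
     This name is NOT in the patch; it is just an illustration. -->
<property>
  <name>yarn.resourcemanager.monitor.capacity.preemption.inter-queue-preemption.conservative-drf</name>
  <value>false</value>
</property>
{code}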

> Config to allow Intra-queue preemption to  enable/disable conservativeDRF
> -
>
> Key: YARN-10613
> URL: https://issues.apache.org/jira/browse/YARN-10613
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, scheduler preemption
>Affects Versions: 3.3.0, 3.2.2, 3.1.4, 2.10.1
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Minor
>
> YARN-8292 added code that prevents CS intra-queue preemption from preempting 
> containers from an app unless all of the major resources used by the app are 
> greater than the user limit for that user.
> Ex:
> | Used | User Limit |
> | <58GB, 58> | <30GB, 300> |
> In this example, only used memory is above the user limit, not used vcores. 
> So, intra-queue preemption will not occur.
> YARN-8292 added the {{conservativeDRF}} flag to 
> {{CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain}}. 
> If {{conservativeDRF}} is false, containers will be preempted from apps in 
> the example state. If true, containers will not be preempted.
> This flag is hard-coded to false for Inter-queue (cross-queue) preemption and 
> true for intra-queue (in-queue) preemption.
> I propose that in some cases, we want intra-queue preemption to be more 
> aggressive and preempt in the example case. To accommodate that, I propose 
> the addition of the following config property:
> {code:xml}
>   
> 
> yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.conservative-drf
> true
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10607) User environment is unable to prepend PATH when mapreduce.admin.user.env also sets PATH

2021-02-03 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-10607:
---
Attachment: YARN-10607.001.patch

> User environment is unable to prepend PATH when mapreduce.admin.user.env also 
> sets PATH
> ---
>
> Key: YARN-10607
> URL: https://issues.apache.org/jira/browse/YARN-10607
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-10607.001.patch
>
>
> When using the tarball approach to ship relevant Hadoop jars to containers, 
> it is helpful to set {{mapreduce.admin.user.env}} to something like 
> {{PATH=./hadoop-tarball:\{\{PATH\}\}}} to make sure that all of the Hadoop 
> binaries are on the PATH. This way you can call {{hadoop}} instead of 
> {{./hadoop-tarball/hadoop}}. The intention here is to force prepend 
> {{./hadoop-tarball}} and then append the set {{PATH}} afterwards. But if a 
> user would like to override the appended portion of {{PATH}} in their 
> environment, they are unable to do so. This is because {{PATH}} ends up 
> getting parsed twice. Initially it is set via {{mapreduce.admin.user.env}} to 
> {{PATH=./hadoop-tarball:$SYS_PATH}}. In this case {{SYS_PATH}} is what I'll 
> refer to as the normal system path. E.g. {{/usr/local/bin:/usr/bin}}, etc.
> After this, the user env parsing happens. For example, let's say the user 
> sets their {{PATH}} to {{PATH=.:$PATH}}. We have already parsed {{PATH}} from 
> the admin.user.env. Then we go to parse the user environment and find the 
> user also specified {{PATH}}. So {{$PATH}} ends up getting expanded 
> to {{./hadoop-tarball:$SYS_PATH}}, which leads to the user's {{PATH}} being 
> {{PATH=.:./hadoop-tarball:$SYS_PATH}}. We then append this to {{PATH}}, which 
> has already been set in the environment map via the admin.user.env. So we 
> finally end up with 
> {{PATH=./hadoop-tarball:$SYS_PATH:.:./hadoop-tarball:$SYS_PATH}}. 
> This normally isn't a huge deal, but if you want to ship a version of 
> python/perl/etc. that clashes with the one that is already there in 
> {{SYS_PATH}}, you will need to refer to it by its full path, since in the 
> above example {{.}} doesn't appear until after {{$SYS_PATH}}. This is a pain, 
> and a user's {{PATH}} should be able to override the system/container 
> {{SYS_PATH}}, even when the hadoop tarball is also forcefully prepended to 
> {{PATH}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10613) Config to allow Intra-queue preemption to enable/disable conservativeDRF

2021-02-03 Thread Eric Payne (Jira)
Eric Payne created YARN-10613:
-

 Summary: Config to allow Intra-queue preemption to  enable/disable 
conservativeDRF
 Key: YARN-10613
 URL: https://issues.apache.org/jira/browse/YARN-10613
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacity scheduler, scheduler preemption
Affects Versions: 2.10.1, 3.1.4, 3.2.2, 3.3.0
Reporter: Eric Payne
Assignee: Eric Payne


YARN-8292 added code that prevents CS intra-queue preemption from preempting 
containers from an app unless all of the major resources used by the app are 
greater than the user limit for that user.

Ex:
| Used | User Limit |
| <58GB, 58> | <30GB, 300> |

In this example, only used memory is above the user limit, not used vcores. So, 
intra-queue preemption will not occur.

YARN-8292 added the {{conservativeDRF}} flag to 
{{CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain}}. 
If {{conservativeDRF}} is false, containers will be preempted from apps in the 
example state. If true, containers will not be preempted.

This flag is hard-coded to false for Inter-queue (cross-queue) preemption and 
true for intra-queue (in-queue) preemption.

I propose that in some cases, we want intra-queue preemption to be more 
aggressive and preempt in the example case. To accommodate that, I propose the 
addition of the following config property:
{code:xml}
  

yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.conservative-drf
true
  
{code}
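
To make the flag's effect concrete, the decision roughly amounts to the 
following (a sketch of the idea only, not the actual code in 
CapacitySchedulerPreemptionUtils; {{Resource}} is the standard YARN record):
{code:java}
import org.apache.hadoop.yarn.api.records.Resource;

final class ConservativeDrfSketch {
  // Sketch of the conservative-DRF idea: may we preempt from this app?
  static boolean mayPreempt(Resource used, Resource userLimit,
      boolean conservativeDRF) {
    boolean memOver = used.getMemorySize() > userLimit.getMemorySize();
    boolean vcoresOver = used.getVirtualCores() > userLimit.getVirtualCores();
    // conservative: ALL major resources must exceed the user limit;
    // aggressive:   ANY single resource over the limit is enough.
    return conservativeDRF ? (memOver && vcoresOver)
                           : (memOver || vcoresOver);
  }
}
{code}
With the <58GB, 58> vs <30GB, 300> example above, the conservative check yields 
false (vcores are below the limit) while the aggressive check yields true, 
which is exactly the behavior difference the new property would expose.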



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278337#comment-17278337
 ] 

Szilard Nemeth edited comment on YARN-10585 at 2/3/21, 8:10 PM:


Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an exceptional situation. I have 200+ added commits and I 
can't recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests [~shuzirra] 
introduced and the coverage was more than enough. Such an NPE would have 
surfaced during the UT execution as well.

[~sunil.gov...@gmail.com] Please chime in for the topic of how to fix this: in 
a follow-up or reopening this one, please share your thoughts about pros/cons.
Thanks



was (Author: snemeth):
Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an exceptional situation. I have 200+ added commits and I 
can't recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests [~shuzirra] 
introduced and the coverage was more than enough. Such an NPE would have 
surfaced during the UT execution as well.


> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Comment Edited] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278337#comment-17278337
 ] 

Szilard Nemeth edited comment on YARN-10585 at 2/3/21, 8:10 PM:


Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an exceptional situation. I have 200+ added commits and I 
can't recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests [~shuzirra] 
introduced and the coverage was more than enough. Such an NPE would have 
surfaced during the UT execution as well.

[~sunilg] Please chime in for the topic of how to fix this: in a follow-up or 
reopening this one, please share your thoughts about pros/cons.
Thanks



was (Author: snemeth):
Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an exceptional situation. I have 200+ added commits and I 
can't recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests [~shuzirra] 
introduced and the coverage was more than enough. Such an NPE would have 
surfaced during the UT execution as well.

[~sunil.gov...@gmail.com] Please chime in for the topic of how to fix this: in 
a follow-up or reopening this one, please share your thoughts about pros/cons.
Thanks


> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.

[jira] [Comment Edited] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278337#comment-17278337
 ] 

Szilard Nemeth edited comment on YARN-10585 at 2/3/21, 8:09 PM:


Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an exceptional situation. I have 200+ added commits and I 
can't recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests [~shuzirra] 
introduced and the coverage was more than enough. Such an NPE would have 
surfaced during the UT execution as well.



was (Author: snemeth):
Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an exceptional situation. I have 200+ added commits and I 
can't recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests Gergo introduced and 
the coverage was more than enough. Such an NPE would have surfaced during the 
UT execution as well.


> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Comment Edited] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278337#comment-17278337
 ] 

Szilard Nemeth edited comment on YARN-10585 at 2/3/21, 8:08 PM:


Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an exceptional situation. I have 200+ added commits and I 
can't recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests Gergo introduced and 
the coverage was more than enough. Such an NPE would have surfaced during the 
UT execution as well.



was (Author: snemeth):
Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an excecptional situation. I have 200+ commits and I can't 
recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests Gergo introduced and 
the coverage was more than enough. Such an NPE would have surfaced during the 
UT execution as well.


> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Comment Edited] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278337#comment-17278337
 ] 

Szilard Nemeth edited comment on YARN-10585 at 2/3/21, 8:08 PM:


Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modifies git's commit history and this is to be 
avoided on a repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an excecptional situation. I have 200+ commits and I can't 
recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests Gergo introduced and 
the coverage was more than enough. Such an NPE would have surfaced during the 
UT execution as well.



was (Author: snemeth):
Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modified git history and this is to be avoided on a 
repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an excecptional situation. I have 200+ commits and I can't 
recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests Gergo introduced and 
the coverage was more than enough. Such an NPE would have surfaced during the 
UT execution as well.


> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org

[jira] [Commented] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278337#comment-17278337
 ] 

Szilard Nemeth commented on YARN-10585:
---

Hi [~ahussein],

My thoughts:

1. Apologies for merging this one with the Findbugs issue.
I have been a committer since middle of 2019 and have been paying attention and 
have been striving for the best code quality and Yetus results, making sure the 
code meets the code quality standards we're expecting at Hadoop.
This one is an exceptional case that simply fell through the cracks.

2. About the UT failures: They are completely unrelated
- 
org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testRMRestartOnMissingAttempts[FAIR]:
 This is Fair scheduler related and the patch is not
- 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testRMRestartWithExpiredToken:
 This is a well known flakey.


3. I can see that [~shuzirra] already reported YARN-10612 and you also left a 
comment there.
I still don't understand how reopening this jira is a better approach than 
fixing it in a follow-up.
We will have one more commit on top of trunk nevertheless, as I would not 
revert this commit for the sake of a single findbugs warning.
You mentioned amending on the other jira. How did you mean that? I never 
amended any commit as it modified git history and this is to be avoided on a 
repository that is used by many many people.

4. About scalability: I generally agree with your comment but as said in bullet 
point 1, this is an excecptional situation. I have 200+ commits and I can't 
recall a case where I committed findbugs issues. So it's a bit of an 
overstatement that this will cause a flood of commits.

5. Credibility: I can agree that we need to strive for findbugs error free 
commits. However, I have carefully reviewed the unit tests Gergo introduced and 
the coverage was more than enough. Such an NPE would have surfaced during the 
UT execution as well.


> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278335#comment-17278335
 ] 

Szilard Nemeth commented on YARN-10612:
---

Hi [~ahussein],
See my comment on the other jira (YARN-10585).

> Fix findbugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10601) The Yarn client should use the UGI who created the Yarn client for obtaining a delegation token for the remote log dir

2021-02-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278327#comment-17278327
 ] 

Gézapeti edited comment on YARN-10601 at 2/3/21, 7:57 PM:
--

I think we haven't added the doAs block as it looked like the job submission 
used the tokens we've added into the job config and everything we've checked 
showed that the application is run as the user we've intended. We have missed 
similar issues due to our test setup unfortunately: OOZIE-3478


was (Author: gezapeti):
I think we haven't added the doAs block as it looked like the job submission 
used the token we've added into the job config and everything we've checked 
showed that the application is run as the user we've intended. We have missed 
similar issues due to our test setup unfortunately: OOZIE-3478

> The Yarn client should use the UGI who created the Yarn client for obtaining 
> a delegation token for the remote log dir
> --
>
> Key: YARN-10601
> URL: https://issues.apache.org/jira/browse/YARN-10601
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Daniel Fritsi
>Priority: Critical
>
> It seems there was a bug introduced in YARN-10333 in this section of 
> *{color:#0747A6}{{addLogAggregationDelegationToken}}{color}*:
> {code:java}
> Path remoteRootLogDir = fileController.getRemoteRootLogDir();
> FileSystem fs = remoteRootLogDir.getFileSystem(conf);
> final org.apache.hadoop.security.token.Token<?>[] finalTokens =
> fs.addDelegationTokens(masterPrincipal, credentials);
> {code}
> *{color:#0747A6}{{remoteRootLogDir.getFileSystem}}{color}* simply does this:
> {code:java}
> public FileSystem getFileSystem(Configuration conf) throws IOException {
>   return FileSystem.get(this.toUri(), conf);
> }
> {code}
> As far as I know it's customary to create a YarnClient instance via 
> *{color:#0747A6}{{YarnClient.createYarnClient()}}{color}* in a 
> UserGroupInformation.doAs block if you would like to use it with a different 
> user than the current one. E.g.:
> {code:java}
> YarnClient yarnClient = ugi.doAs(new PrivilegedExceptionAction<YarnClient>() {
> @Override
> public YarnClient run() throws Exception {
> YarnClient yarnClient = YarnClient.createYarnClient();
> yarnClient.init(conf);
> yarnClient.start();
> return yarnClient;
> }
> });
> {code}
> If this statement is correct then I think YarnClient should save the 
> *{color:#0747A6}{{UserGroupInformation.getCurrentUser()}}{color}* when the 
> YarnClient is being created and the 
> *{color:#0747A6}{{remoteRootLogDir.getFileSystem(conf)}}{color}* call should 
> be made inside an ugi.doAs block with that saved user.
> A more concrete example:
> {code:java}
> public YarnClient createYarnClient(UserGroupInformation ugi, Configuration 
> conf) throws Exception {
> return ugi.doAs((PrivilegedExceptionAction<YarnClient>) () -> {
> // Here I am the submitterUser (see below)
> YarnClient yarnClient = YarnClient.createYarnClient();
> yarnClient.init(conf);
> yarnClient.start();
> return yarnClient;
> });
> }
> public void run() {
> // Here I am the serviceUser
> // ...
> Configuration conf = ...
> // ...
> UserGroupInformation ugi = getSubmitterUser();
> // ...
> YarnClient yarnClient = createYarnClient(ugi);
> // ...
> ApplicationSubmissionContext context = ...
> // ...
> yarnClient.submitApplication(context);
> }
> {code}
> As you can see *{color:#0747A6}{{submitApplication}}{color}* is not invoked 
> inside an ugi.doAs block and submitApplication is the one who will eventually 
> invoke *{color:#0747A6}{{addLogAggregationDelegationToken}}{color}*. That's 
> why we need to save the UGI during the YarnClient creation and create the 
> FileSystem instance inside an ugi.doAs with that saved user. Otherwise Yarn 
> will try to get a delegation token with an incorrect user (serviceUser) 
> instead of the submitterUser.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10601) The Yarn client should use the UGI who created the Yarn client for obtaining a delegation token for the remote log dir

2021-02-03 Thread Jira


[ 
https://issues.apache.org/jira/browse/YARN-10601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278327#comment-17278327
 ] 

Gézapeti commented on YARN-10601:
-

I think we haven't added the doAs block as it looked like the job submission 
used the token we've added into the job config and everything we've checked 
showed that the application is run as the user we've intended. We have missed 
similar issues due to our test setup unfortunately: OOZIE-3478

> The Yarn client should use the UGI who created the Yarn client for obtaining 
> a delegation token for the remote log dir
> --
>
> Key: YARN-10601
> URL: https://issues.apache.org/jira/browse/YARN-10601
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Daniel Fritsi
>Priority: Critical
>
> It seems there was a bug introduced in YARN-10333 in this section of 
> *{color:#0747A6}{{addLogAggregationDelegationToken}}{color}*:
> {code:java}
> Path remoteRootLogDir = fileController.getRemoteRootLogDir();
> FileSystem fs = remoteRootLogDir.getFileSystem(conf);
> final org.apache.hadoop.security.token.Token<?>[] finalTokens =
> fs.addDelegationTokens(masterPrincipal, credentials);
> {code}
> *{color:#0747A6}{{remoteRootLogDir.getFileSystem}}{color}* simply does this:
> {code:java}
> public FileSystem getFileSystem(Configuration conf) throws IOException {
>   return FileSystem.get(this.toUri(), conf);
> }
> {code}
> As far as I know it's customary to create a YarnClient instance via 
> *{color:#0747A6}{{YarnClient.createYarnClient()}}{color}* in a 
> UserGroupInformation.doAs block if you would like to use it with a different 
> user than the current one. E.g.:
> {code:java}
> YarnClient yarnClient = ugi.doAs(new PrivilegedExceptionAction<YarnClient>() {
> @Override
> public YarnClient run() throws Exception {
> YarnClient yarnClient = YarnClient.createYarnClient();
> yarnClient.init(conf);
> yarnClient.start();
> return yarnClient;
> }
> });
> {code}
> If this statement is correct then I think YarnClient should save the 
> *{color:#0747A6}{{UserGroupInformation.getCurrentUser()}}{color}* when the 
> YarnClient is being created and the 
> *{color:#0747A6}{{remoteRootLogDir.getFileSystem(conf)}}{color}* call should 
> be made inside an ugi.doAs block with that saved user.
> A more concrete example:
> {code:java}
> public YarnClient createYarnClient(UserGroupInformation ugi, Configuration 
> conf) throws Exception {
> return ugi.doAs((PrivilegedExceptionAction<YarnClient>) () -> {
> // Here I am the submitterUser (see below)
> YarnClient yarnClient = YarnClient.createYarnClient();
> yarnClient.init(conf);
> yarnClient.start();
> return yarnClient;
> });
> }
> public void run() {
> // Here I am the serviceUser
> // ...
> Configuration conf = ...
> // ...
> UserGroupInformation ugi = getSubmitterUser();
> // ...
> YarnClient yarnClient = createYarnClient(ugi);
> // ...
> ApplicationSubmissionContext context = ...
> // ...
> yarnClient.submitApplication(context);
> }
> {code}
> As you can see *{color:#0747A6}{{submitApplication}}{color}* is not invoked 
> inside an ugi.doAs block and submitApplication is the one who will eventually 
> invoke *{color:#0747A6}{{addLogAggregationDelegationToken}}{color}*. That's 
> why we need to save the UGI during the YarnClient creation and create the 
> FileSystem instance inside an ugi.doAs with that saved user. Otherwise Yarn 
> will try to get a delegation token with an incorrect user (serviceUser) 
> instead of the submitterUser.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10611) Fix that shaded should be used for google guava imports in YARN-10352.

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278305#comment-17278305
 ] 

Hadoop QA commented on YARN-10611:
--

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 1m 19s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 0s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | | The patch appears to include 1 new or modified test file. |
|| || || || trunk Compile Tests || ||
| +1 | mvninstall | 23m 57s | | trunk passed |
| +1 | compile | 1m 4s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 | compile | 0m 52s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| +1 | checkstyle | 0m 45s | | trunk passed |
| +1 | mvnsite | 0m 53s | | trunk passed |
| +1 | shadedclient | 16m 54s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 43s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| 0 | spotbugs | 1m 56s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 1m 54s | https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/582/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in trunk has 1 extant findbugs warning. |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 0m 53s | | the patch passed |
| +1 | compile | 0m 54s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 |
| +1 | javac | 0m 54s | | the patch passed |
| +1 | compile | 0m 45s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
| +1 | javac | 0m 45s | | the patch passed |
| +1 | checkstyle | 0m 39s | | the patch passed |
| +1 | mvnsite | 0m 48s | | the patch passed |
| +1 | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 56s | | patch has no errors when building and testing our client artifacts. |
| {color:green}+1{color} | {c

[jira] [Reopened] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reopened YARN-10585:
--

> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10585) Create a class which can convert from legacy mapping rule format to the new JSON format

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278258#comment-17278258
 ] 

Ahmed Hussein commented on YARN-10585:
--

Thanks [~shuzirra] and [~snemeth] for the contribution.
I am reopening this jira as it was merged with Yetus failures.

For future code merges and commits, please make sure that the patch/PR does 
not generate Yetus errors before merging.
It is not scalable to have several Jiras filed just to fix checkstyle and 
findbugs issues.
Besides raising doubts about the overall credibility of the patch, this causes 
a flood of commits and makes reverting them difficult, leading to an unstable 
code repository.

> Create a class which can convert from legacy mapping rule format to the new 
> JSON format
> ---
>
> Key: YARN-10585
> URL: https://issues.apache.org/jira/browse/YARN-10585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Assignee: Gergely Pollak
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: YARN-10585.001.patch, YARN-10585.002.patch, 
> YARN-10585.003.patch
>
>
> To make transition easier we need to create tooling to support the migration 
> effort. The first step is to create a class which can migrate from legacy to 
> the new JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278251#comment-17278251
 ] 

Ahmed Hussein commented on YARN-10612:
--

Hey [~shuzirra], can you close this jira and address the findbugs and 
checkstyle errors generated by the code change in the original Jira, 
YARN-10585?
I understand that it is more work to amend changes to the merged code, but the 
merge should not have gone through with errors in the Yetus report.

It is inconvenient for developers to navigate through Jiras and code revisions 
when there are such dependencies between commits.
At any point, rolling back a feature would require building a chain of the 
multiple commits that constitute a single ticket.

> Fix findbugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278244#comment-17278244
 ] 

Ahmed Hussein edited comment on YARN-10352 at 2/3/21, 5:39 PM:
---

The problem is that at any point we have more than one commit for each main 
Jira ticket.
This makes it hard to move between revisions without breaking the build.

I suggest amending the fixes to the original commit and closing YARN-10611, 
i.e. reverting and recommitting a patch that does not generate Yetus errors.

Please make sure that the patch passes Yetus before merging.





was (Author: ahussein):
The problem is that at any point we have more than one commit for each main 
Jira ticket.
This makes it hard to move between revisions without breaking the build.

I suggest amending the fixes to the original commit and closing YARN-10611, 
i.e. reverting and recommitting a patch that does not generate errors by Yetus.

Please make sure that the patch passes Yetus before merging.




> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is Enabled, stopping a NM won't unregister it from the RM. 
> So the RM Active Nodes list will still contain those stopped nodes until the 
> NM Liveliness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude the nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the Asynchronous Capacity Scheduler Threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.
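
For illustration, the staleness check referred to above has roughly this shape 
(a simplified sketch with an assumed signature; the real logic lives in 
CapacityScheduler#shouldSkipNodeSchedule):

{code:java}
// Simplified sketch, not the actual CapacityScheduler code.
static boolean shouldSkipNodeSchedule(long lastHeartbeatTimeMs, long nowMs,
    long heartbeatIntervalMs) {
  // With NM recovery enabled a stopped NM never unregisters, so a node
  // whose last heartbeat is older than the configured interval must be
  // skipped by multi-node placement until the liveliness monitor expires it.
  return nowMs - lastHeartbeatTimeMs > heartbeatIntervalMs;
}
{code}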



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278244#comment-17278244
 ] 

Ahmed Hussein commented on YARN-10352:
--

The problem is that at any point we have more than one commit for each main 
Jira ticket.
This makes it hard to move between revisions without breaking the build.

I suggest amending the fixes to the original commit and closing YARN-10611, 
i.e. reverting and recommitting a patch that does not generate errors by Yetus.

Please make sure that the patch passes Yetus before merging.




> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is Enabled, stopping a NM won't unregister it from the RM. 
> So the RM Active Nodes list will still contain those stopped nodes until the 
> NM Liveliness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude the nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the Asynchronous Capacity Scheduler Threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278226#comment-17278226
 ] 

Hadoop QA commented on YARN-10532:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 2 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
47s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  6s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
49s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/581/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html{color}
 | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 305 unchanged - 1 fixed = 305 total (was 306) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 37s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |

[jira] [Commented] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278217#comment-17278217
 ] 

Ahmed Hussein commented on YARN-10352:
--

Thanks [~zhuqi] for the prompt response.
Do you know what the findbugs errors reported by Yetus on January 20th were? It 
would be awesome to fix those as well in YARN-10611. 

> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is Enabled, stopping a NM won't unregister it from the RM. 
> So the RM Active Nodes list will still contain those stopped nodes until the 
> NM Liveliness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude the nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the Asynchronous Capacity Scheduler Threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278134#comment-17278134
 ] 

Qi Zhu commented on YARN-10352:
---

[~ahussein]

Fixed it in YARN-10611.

Thanks.

> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is Enabled, stopping a NM won't unregister it from the RM. 
> So the RM Active Nodes list will still contain those stopped nodes until the 
> NM Liveliness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude the nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the Asynchronous Capacity Scheduler Threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10611) Fix that shaded should be used for google guava imports in YARN-10352.

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278132#comment-17278132
 ] 

Qi Zhu commented on YARN-10611:
---

cc [~ahussein]

Fixed the guava import in 
[TestCapacitySchedulerMultiNodes-L#28|https://github.com/apache/hadoop/commit/6fc26ad5392a2a61ace60b88ed931fed3859365d#diff-34d534eb66cd9af6d7c47a9f643d598b1ad4cef3453219457769e92fbd4a649dR28].

Thanks.
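
For reference, the fix is of this shape (the concrete class is in the linked 
diff; the usage below is only an illustrative example):

{code:java}
// Before: direct Guava import, which the build flags.
// import com.google.common.collect.Iterators;

// After: the shaded Guava from hadoop-thirdparty.
import org.apache.hadoop.thirdparty.com.google.common.collect.Iterators;

import java.util.Iterator;
import java.util.List;

class ShadedGuavaExample {
  static <T> T firstOrNull(List<T> list) {
    Iterator<T> it = list.iterator();
    // Iterators.getNext returns the default value for an exhausted iterator.
    return Iterators.getNext(it, null);
  }
}
{code}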

> Fix that shaded should be used for google guava imports in YARN-10352.
> --
>
> Key: YARN-10611
> URL: https://issues.apache.org/jira/browse/YARN-10611
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10611.001.patch
>
>
> Fix that shaded should be used for google guava imports in YARN-10352.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Gergely Pollak (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Pollak updated YARN-10612:
--
Attachment: YARN-10612.001.patch

> Fix findbugs issue introduced in YARN-10585
> 
>
> Key: YARN-10612
> URL: https://issues.apache.org/jira/browse/YARN-10612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gergely Pollak
>Priority: Major
> Attachments: YARN-10612.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10612) Fix findbugs issue introduced in YARN-10585

2021-02-03 Thread Gergely Pollak (Jira)
Gergely Pollak created YARN-10612:
-

 Summary: Fix findbugs issue introduced in YARN-10585
 Key: YARN-10612
 URL: https://issues.apache.org/jira/browse/YARN-10612
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gergely Pollak






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10611) Fix that shaded should be used for google guava imports in YARN-10352.

2021-02-03 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10611:
--
Description: Fix that shaded should be used for google guava imports in 
YARN-10352.  (was: Fix )

> Fix that shaded should be used for google guava imports in YARN-10352.
> --
>
> Key: YARN-10611
> URL: https://issues.apache.org/jira/browse/YARN-10611
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
>
> Fix that shaded should be used for google guava imports in YARN-10352.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10611) Fix that shaded should be used for google guava imports in YARN-10352.

2021-02-03 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10611:
--
Description: Fix 

> Fix that shaded should be used for google guava imports in YARN-10352.
> --
>
> Key: YARN-10611
> URL: https://issues.apache.org/jira/browse/YARN-10611
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
>
> Fix 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reopened YARN-10352:
--

> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is Enabled, stopping a NM won't unregister it from the RM. 
> So the RM Active Nodes list will still contain those stopped nodes until the 
> NM Liveliness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude the nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the Asynchronous Capacity Scheduler Threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-10611) Fix that shaded should be used for google guava imports in YARN-10352.

2021-02-03 Thread Qi Zhu (Jira)
Qi Zhu created YARN-10611:
-

 Summary: Fix that shaded should be used for google guava imports 
in YARN-10352.
 Key: YARN-10611
 URL: https://issues.apache.org/jira/browse/YARN-10611
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Qi Zhu
Assignee: Qi Zhu






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278121#comment-17278121
 ] 

Qi Zhu commented on YARN-10352:
---

Thanks [~ahussein] for the review.

I will help to fix this.

> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is Enabled, stopping a NM won't unregister it from the RM. 
> So the RM Active Nodes list will still contain those stopped nodes until the 
> NM Liveliness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude the nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the Asynchronous Capacity Scheduler Threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10352) Skip schedule on not heartbeated nodes in Multi Node Placement

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278108#comment-17278108
 ] 

Ahmed Hussein commented on YARN-10352:
--

[~bibinchundatt] and [~ztang], the patch introduces an unshaded guava import.

Can you please submit a followup to this patch fixing the guava import in 
[TestCapacitySchedulerMultiNodes-L#28|https://github.com/apache/hadoop/commit/6fc26ad5392a2a61ace60b88ed931fed3859365d#diff-34d534eb66cd9af6d7c47a9f643d598b1ad4cef3453219457769e92fbd4a649dR28]?

> Skip schedule on not heartbeated nodes in Multi Node Placement
> --
>
> Key: YARN-10352
> URL: https://issues.apache.org/jira/browse/YARN-10352
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: capacityscheduler, multi-node-placement
> Fix For: 3.4.0
>
> Attachments: YARN-10352-001.patch, YARN-10352-002.patch, 
> YARN-10352-003.patch, YARN-10352-004.patch, YARN-10352-005.patch, 
> YARN-10352-006.patch, YARN-10352-007.patch, YARN-10352-008.patch, 
> YARN-10352-010.patch, YARN-10352.009.patch
>
>
> When Node Recovery is Enabled, stopping a NM won't unregister it from the RM. 
> So the RM Active Nodes list will still contain those stopped nodes until the 
> NM Liveliness Monitor expires them after the configured timeout 
> (yarn.nm.liveness-monitor.expiry-interval-ms = 10 mins). During these 10 
> minutes, Multi Node Placement assigns containers to those nodes. It needs to 
> exclude the nodes which have not heartbeated for the configured heartbeat 
> interval (yarn.resourcemanager.nodemanagers.heartbeat-interval-ms=1000ms), 
> similar to the Asynchronous Capacity Scheduler Threads 
> (CapacityScheduler#shouldSkipNodeSchedule).
> *Repro:*
> 1. Enable Multi Node Placement 
> (yarn.scheduler.capacity.multi-node-placement-enabled) + Node Recovery 
> Enabled (yarn.node.recovery.enabled)
> 2. Have only one NM running, say worker0
> 3. Stop worker0 and start any other NM, say worker1
> 4. Submit a sleep job. The containers will time out as they are assigned to 
> the stopped NM worker0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278079#comment-17278079
 ] 

Qi Zhu commented on YARN-10610:
---

 [~snemeth]  [~shuzirra]

The findbugs warning is not related to this change, and I think the checkstyle 
warning should stay as it is, to remain consistent with the original queueName 
field.

Could you help review this for merge?

Thanks.

 

> Add queuePath to restful api for CapacityScheduler consistent with 
> FairScheduler queuePath.
> ---
>
> Key: YARN-10610
> URL: https://issues.apache.org/jira/browse/YARN-10610
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10610.001.patch, YARN-10610.002.patch, 
> image-2021-02-03-13-47-13-516.png
>
>
> The CS only has a queueName, but not the full queuePath.
> !image-2021-02-03-13-47-13-516.png|width=631,height=356!
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278068#comment-17278068
 ] 

Hadoop QA commented on YARN-10610:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
3s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 2 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
37s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 37s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
48s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/580/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html{color}
 | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/580/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color}
 | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 109 unchanged - 0 fixed = 111 total (was 109) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |

[jira] [Updated] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-03 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10532:
--
Attachment: YARN-10532.014.patch

> Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is 
> not being used
> 
>
> Key: YARN-10532
> URL: https://issues.apache.org/jira/browse/YARN-10532
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10532.001.patch, YARN-10532.002.patch, 
> YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, 
> YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, 
> YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, 
> YARN-10532.012.patch, YARN-10532.013.patch, YARN-10532.014.patch
>
>
> It's better if we can delete auto-created queues when they are not in use for 
> a period of time (like 5 mins). It will be helpful when we have a large 
> number of auto-created queues (e.g. from 500 users), but only a small subset 
> of queues are actively used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17278029#comment-17278029
 ] 

Hadoop QA commented on YARN-10532:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
24s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 2 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
49s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 53s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
51s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/579/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html{color}
 | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/579/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color}
 | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 7 new + 305 unchanged - 1 fixed = 312 total (was 306) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |

[jira] [Commented] (YARN-10552) Eliminate code duplication in SLSCapacityScheduler and SLSFairScheduler

2021-02-03 Thread Siddharth Ahuja (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277991#comment-17277991
 ] 

Siddharth Ahuja commented on YARN-10552:


Hey [~snemeth], thanks a lot for the de-duplication here! 

A few comments from my side:

# SLSSchedulerCommons - Can we please explicitly assign a default value to the 
declared fields like metricsOn etc., rather than relying on Java to assign 
one, just as good programming style?
# Class variables - metricsOn & schedulerMetrics could be marked as private in 
SLSSchedulerCommons, with new getters that the individual scheduler classes 
invoke instead of referring to the fields directly on a separate object.
# The "Tracker" seems to be common to both schedulers, so we could move its 
declaration & initialization to the common SLSSchedulerCommons, implement 
getTracker() there to return the tracker object, and keep getTracker() in the 
individual schedulers (we have to, thanks to SchedulerWrapper) so that they 
just return the tracker by calling schedulerCommons.getTracker() - see the 
sketch after this list.
# The //metrics off and //metrics on comments inside handle() in 
SLSSchedulerCommons don't seem to add much value, so let's just remove them.
# appQueueMap was not present in SLSFairScheduler before (it was in 
SLSCapacityScheduler); however, from 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/SLSFairScheduler.java#L163,
 it seems that the super class of the schedulers - 
https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java#L159
 - already has this. As such, do we really need to define a new common map in 
SLSSchedulerCommons at all, or can we somehow reuse the super class's map? It 
might need some code updates though.
# In regard to the above point, considering SLSFairScheduler did not 
previously have any of the following code in its handle() method:

{code}
    AppAttemptRemovedSchedulerEvent appRemoveEvent =
        (AppAttemptRemovedSchedulerEvent) schedulerEvent;
    appQueueMap.remove(appRemoveEvent.getApplicationAttemptID());
  } else if (schedulerEvent.getType() ==
      SchedulerEventType.APP_ATTEMPT_ADDED
      && schedulerEvent instanceof AppAttemptAddedSchedulerEvent) {
    AppAttemptAddedSchedulerEvent appAddEvent =
        (AppAttemptAddedSchedulerEvent) schedulerEvent;
    SchedulerApplication app =
        (SchedulerApplication) scheduler.getSchedulerApplications()
            .get(appAddEvent.getApplicationAttemptId().getApplicationId());
    appQueueMap.put(appAddEvent.getApplicationAttemptId(),
        app.getQueue().getQueueName());
{code}

Do you think this was a bug that wasn't identified earlier?
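
A small sketch of the delegation suggested in point 3 (class shapes are 
assumed for illustration, not the actual SLS code):

{code:java}
class Tracker {
  // ... common tracking state ...
}

class SLSSchedulerCommons {
  private final Tracker tracker = new Tracker();

  Tracker getTracker() {
    return tracker;
  }
}

class SLSFairSchedulerSketch /* implements SchedulerWrapper */ {
  private final SLSSchedulerCommons schedulerCommons =
      new SLSSchedulerCommons();

  // Kept on the scheduler only because SchedulerWrapper requires it; the
  // state itself lives in the common class.
  public Tracker getTracker() {
    return schedulerCommons.getTracker();
  }
}
{code}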


> Eliminate code duplication in SLSCapacityScheduler and SLSFairScheduler
> ---
>
> Key: YARN-10552
> URL: https://issues.apache.org/jira/browse/YARN-10552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-10552.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277977#comment-17277977
 ] 

Qi Zhu commented on YARN-10610:
---

Thanks a lot [~shuzirra] for the review.

Fixed the TestRMWebServicesForCSWithPartitions test case in the latest patch.

 

> Add queuePath to restful api for CapacityScheduler consistent with 
> FairScheduler queuePath.
> ---
>
> Key: YARN-10610
> URL: https://issues.apache.org/jira/browse/YARN-10610
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10610.001.patch, YARN-10610.002.patch, 
> image-2021-02-03-13-47-13-516.png
>
>
> The CS only has a queueName, but not the full queuePath.
> !image-2021-02-03-13-47-13-516.png|width=631,height=356!
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10610:
--
Attachment: YARN-10610.002.patch

> Add queuePath to restful api for CapacityScheduler consistent with 
> FairScheduler queuePath.
> ---
>
> Key: YARN-10610
> URL: https://issues.apache.org/jira/browse/YARN-10610
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10610.001.patch, YARN-10610.002.patch, 
> image-2021-02-03-13-47-13-516.png
>
>
> The CS only has a queueName, but not the full queuePath.
> !image-2021-02-03-13-47-13-516.png|width=631,height=356!
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Gergely Pollak (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277968#comment-17277968
 ] 

Gergely Pollak commented on YARN-10610:
---

[~zhuqi] thank you for the patch! Nice change. When we introduced the leaf 
queue / full path change, we intentionally did not change any external 
interfaces, to make sure we don't break any tools relying on them. However, 
this change is very clean and only extends the response, so there should be 
no such issue with it!

LGTM +1 (Non-binding)
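
For illustration, a response-only extension has this shape (names are assumed, 
not the actual CapacitySchedulerQueueInfo code):

{code:java}
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Adding queuePath next to queueName only extends the REST response, so
// existing consumers that read queueName keep working unchanged.
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class QueueInfoSketch {
  String queueName; // leaf name, as before, e.g. "a1"
  String queuePath; // new: the full path, e.g. "root.a.a1"
}
{code}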

> Add queuePath to restful api for CapacityScheduler consistent with 
> FairScheduler queuePath.
> ---
>
> Key: YARN-10610
> URL: https://issues.apache.org/jira/browse/YARN-10610
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10610.001.patch, image-2021-02-03-13-47-13-516.png
>
>
> The CS only has a queueName, but not the full queuePath.
> !image-2021-02-03-13-47-13-516.png|width=631,height=356!
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277949#comment-17277949
 ] 

Hadoop QA commented on YARN-10610:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
38s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 37s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
46s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/578/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html{color}
 | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/578/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color}
 | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 105 unchanged - 0 fixed = 107 total (was 105) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |

[jira] [Updated] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-03 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10532:
--
Attachment: YARN-10532.013.patch

> Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is 
> not being used
> 
>
> Key: YARN-10532
> URL: https://issues.apache.org/jira/browse/YARN-10532
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10532.001.patch, YARN-10532.002.patch, 
> YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, 
> YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, 
> YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, 
> YARN-10532.012.patch, YARN-10532.013.patch
>
>
> It's better if we can delete auto-created queues when they are not in use for 
> a period of time (like 5 mins). It will be helpful when we have a large 
> number of auto-created queues (e.g. from 500 users), but only a small subset 
> of queues are actively used.
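
To make the described behaviour concrete, here is a rough sketch of such an 
idle-queue cleanup pass. All class, method and parameter names below are 
hypothetical, not taken from the attached patches:

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the "delete when idle" idea; not the actual patch.
public class IdleQueueCleanupSketch {
  private final long expiredMs; // idle threshold, e.g. 5 * 60 * 1000
  private final Map<String, Long> lastUsed = new HashMap<>();

  public IdleQueueCleanupSketch(long expiredMs) {
    this.expiredMs = expiredMs;
  }

  /** Record activity (submission/allocation) on a queue. */
  public void markUsed(String queuePath, long nowMs) {
    lastUsed.put(queuePath, nowMs);
  }

  /**
   * One cleanup pass: returns the auto-created queues that have been idle
   * longer than the threshold. Queues that still have applications are
   * skipped, so running workloads are never affected.
   */
  public Set<String> queuesToDelete(Set<String> autoCreated,
      Set<String> queuesWithApps, long nowMs) {
    Set<String> toDelete = new HashSet<>();
    for (String queue : autoCreated) {
      if (queuesWithApps.contains(queue)) {
        continue; // never delete a queue with running applications
      }
      // A queue never seen before counts as used "now", so a freshly
      // created queue gets a full grace period before deletion.
      long last = lastUsed.getOrDefault(queue, nowMs);
      if (nowMs - last >= expiredMs) {
        toDelete.add(queue);
      }
    }
    return toDelete;
  }
}
{code}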






[jira] [Commented] (YARN-10548) Decouple AM runner logic from SLSRunner

2021-02-03 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277862#comment-17277862
 ] 

Andras Gyori commented on YARN-10548:
-

Thank you [~snemeth]. The patch does not apply; could you rebase it on the 
latest trunk?

> Decouple AM runner logic from SLSRunner
> ---
>
> Key: YARN-10548
> URL: https://issues.apache.org/jira/browse/YARN-10548
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-10548.001.patch
>
>
> SLSRunner has too many responsibilities.
>  One of them is to parse the job details from the SLS input formats and 
> launch the AMs and task containers.
>  The AM runner logic could be decoupled.
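
As a sketch of what that decoupling could look like, the AM-launching 
responsibility could move into its own collaborator class. Everything below 
is a hypothetical illustration, not the attached patch:

{code:java}
import java.util.List;

// Illustrative only: the AM-launching logic pulled out of the main runner.
public class AMRunnerSketch {

  /** Minimal view of what the runner needs to know about one AM. */
  public static class AMDefinition {
    final String queue;
    final String amType;

    AMDefinition(String queue, String amType) {
      this.queue = queue;
      this.amType = amType;
    }
  }

  /** Launches all parsed AMs; the main runner only delegates to this. */
  public void startAMs(List<AMDefinition> amDefinitions) {
    for (AMDefinition am : amDefinitions) {
      launchAM(am);
    }
  }

  private void launchAM(AMDefinition am) {
    // In the real code this would submit the AM to the RM; stubbed here.
    System.out.printf("launching %s AM in queue %s%n", am.amType, am.queue);
  }
}
{code}

This would keep SLSRunner as a thin coordinator while the AM lifecycle 
handling gets its own unit-testable class.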






[jira] [Commented] (YARN-10547) Decouple job parsing logic from SLSRunner

2021-02-03 Thread Andras Gyori (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277859#comment-17277859
 ] 

Andras Gyori commented on YARN-10547:
-

Thank you [~snemeth] for the patch. As this is a refactoring task, I have not 
done a deep semantic analysis; with my limited exposure to SLS, gaining real 
proficiency in the subject would take a longer ramp-up. I have collected some 
feedback nonetheless:
 * AMDefinition has a Builder which is not a standard fluent-API based 
Builder. Is this intentional? (see the sketch after this comment)
 * AMDefinitionFactory should be final
 * There are numerous raw types scattered across different classes (this 
could be mitigated by parameterizing Map even when the exact type is unknown):
 ** AMDefinitionFactory
 ** AMDefinitionSLS
 ** TaskContainerDefinition

It looks much cleaner to me, good job on this. +1 if these concerns are 
addressed along with the checkstyle errors.
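
For reference, here is a minimal example of the standard fluent-API Builder 
shape the first point refers to. The class below is made up purely for 
illustration:

{code:java}
// Made-up example of a conventional fluent builder, for illustration only.
public final class ExampleDefinition {
  private final String queue;
  private final int priority;

  private ExampleDefinition(Builder b) {
    this.queue = b.queue;
    this.priority = b.priority;
  }

  public static Builder builder() {
    return new Builder();
  }

  public static final class Builder {
    private String queue;
    private int priority;

    public Builder queue(String queue) {
      this.queue = queue;
      return this; // each setter returns the builder, so calls chain
    }

    public Builder priority(int priority) {
      this.priority = priority;
      return this;
    }

    public ExampleDefinition build() {
      return new ExampleDefinition(this);
    }
  }
}
{code}

With this shape a caller constructs an instance in one chained expression, 
e.g. ExampleDefinition.builder().queue("root.a").priority(1).build().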

> Decouple job parsing logic from SLSRunner
> -
>
> Key: YARN-10547
> URL: https://issues.apache.org/jira/browse/YARN-10547
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Minor
> Attachments: YARN-10547.001.patch, YARN-10547.002.patch, 
> YARN-10547.003.patch
>
>
> SLSRunner has too many responsibilities.
> One of them is to parse the job details from the SLS input formats and launch 
> the AMs and task containers.
> As a first step, the job parser logic could be decoupled from this class.
> There are 3 types of inputs: 
> - SLS trace
> - Synth
> - Rumen
> Their job parsing method are: 
> - SLS trace: 
> https://github.com/apache/hadoop/blob/005b854f6bad66defafae0abf95dabc6c36ca8b1/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java#L479-L526
> - Synth: 
> https://github.com/apache/hadoop/blob/005b854f6bad66defafae0abf95dabc6c36ca8b1/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java#L722-L790
> - Rumen: 
> https://github.com/apache/hadoop/blob/005b854f6bad66defafae0abf95dabc6c36ca8b1/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java#L651-L716
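
A hedged sketch of what the decoupled parsing could look like, with one 
implementation per input format; the interface and class names below are 
assumptions, not the actual patch:

{code:java}
import java.io.IOException;
import java.util.List;

// Hypothetical shape of the decoupled job parsers; not the actual patch.
public interface JobParserSketch {

  /** One parsed job, reduced to the fields needed for this illustration. */
  class JobDefinition {
    final String queue;
    final long submitTimeMs;

    JobDefinition(String queue, long submitTimeMs) {
      this.queue = queue;
      this.submitTimeMs = submitTimeMs;
    }
  }

  /** Parses one input file into job definitions. */
  List<JobDefinition> parse(String inputPath) throws IOException;

  // The runner would then select an implementation by trace type, e.g. a
  // hypothetical SLSTraceParser, SynthParser or RumenParser, instead of
  // owning all three parsing methods itself.
}
{code}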






[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277852#comment-17277852
 ] 

Hadoop QA commented on YARN-10532:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
19s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 2 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
19s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 24s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
26s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
20s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/577/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html{color}
 | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant findbugs warnings. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 20s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-YARN-Build/577/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt{color}
 | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 305 unchanged - 1 fixed = 306 total (was 306) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |

[jira] [Comment Edited] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277704#comment-17277704
 ] 

Qi Zhu edited comment on YARN-10610 at 2/3/21, 9:29 AM:


1. In our production cluster we want to migrate FS to CS, but the CS RESTful 
API does not expose the queue path consistently with FS.

2. Now that CS supports same-name leaf queues, the full queuePath is also 
needed in the REST API scheduler info (for example, two leaf queues both 
named "tmp" can only be told apart by their full paths, such as root.a.tmp 
and root.b.tmp).

I think we should support it in CS; submitted a patch for review, thanks.

cc [~wangda]  [~tangzhankun] [~shuzirra] [~snemeth] [~pbacsko]


was (Author: zhuqi):
1. In our production cluster we want to migrate FS to CS, but the CS RESTful 
API does not expose the queue path consistently with FS.

2. Now that CS supports same-name leaf queues, the full queuePath is also 
needed in the REST API scheduler info.

I think we should support it in CS.

cc [~wangda]  [~tangzhankun] [~shuzirra] [~snemeth] [~pbacsko]

> Add queuePath to restful api for CapacityScheduler consistent with 
> FairScheduler queuePath.
> ---
>
> Key: YARN-10610
> URL: https://issues.apache.org/jira/browse/YARN-10610
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10610.001.patch, image-2021-02-03-13-47-13-516.png
>
>
> The CS only has a queueName, but not the full queuePath.
> !image-2021-02-03-13-47-13-516.png|width=631,height=356!
>  
>  






[jira] [Updated] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10610:
--
Issue Type: Improvement  (was: Bug)

> Add queuePath to restful api for CapacityScheduler consistent with 
> FairScheduler queuePath.
> ---
>
> Key: YARN-10610
> URL: https://issues.apache.org/jira/browse/YARN-10610
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10610.001.patch, image-2021-02-03-13-47-13-516.png
>
>
> The CS only has a queueName, but not the full queuePath.
> !image-2021-02-03-13-47-13-516.png|width=631,height=356!
>  
>  






[jira] [Comment Edited] (YARN-10610) Add queuePath to restful api for CapacityScheduler consistent with FairScheduler queuePath.

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277704#comment-17277704
 ] 

Qi Zhu edited comment on YARN-10610 at 2/3/21, 9:27 AM:


1. In our production cluster we want to migrate FS to CS, but the CS RESTful 
API does not expose the queue path consistently with FS.

2. Now that CS supports same-name leaf queues, the full queuePath is also 
needed in the REST API scheduler info.

I think we should support it in CS.

cc [~wangda]  [~tangzhankun] [~shuzirra] [~snemeth] [~pbacsko]


was (Author: zhuqi):
1. In our production cluster we want to migrate FS to CS, but the CS RESTful 
API does not expose the queue path consistently with FS.

2. Now that CS supports same-name leaf queues, the full queuePath is also 
needed in the REST API scheduler info.

I think we should support it in CS.

cc [~wangda]  [~shuzirra] [~snemeth] [~pbacsko]

> Add queuePath to restful api for CapacityScheduler consistent with 
> FairScheduler queuePath.
> ---
>
> Key: YARN-10610
> URL: https://issues.apache.org/jira/browse/YARN-10610
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10610.001.patch, image-2021-02-03-13-47-13-516.png
>
>
> The CS only has a queueName, but not the full queuePath.
> !image-2021-02-03-13-47-13-516.png|width=631,height=356!
>  
>  






[jira] [Commented] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-03 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17277789#comment-17277789
 ] 

Qi Zhu commented on YARN-10532:
---

Fixed the javadoc, findbugs and checkstyle issues in the latest patch.

> Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is 
> not being used
> 
>
> Key: YARN-10532
> URL: https://issues.apache.org/jira/browse/YARN-10532
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10532.001.patch, YARN-10532.002.patch, 
> YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, 
> YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, 
> YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, 
> YARN-10532.012.patch
>
>
> It's better if we can delete auto-created queues when they are not in use for 
> a period of time (like 5 mins). It will be helpful when we have a large 
> number of auto-created queues (e.g. from 500 users), but only a small subset 
> of queues are actively used.






[jira] [Updated] (YARN-10532) Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is not being used

2021-02-03 Thread Qi Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qi Zhu updated YARN-10532:
--
Attachment: YARN-10532.012.patch

> Capacity Scheduler Auto Queue Creation: Allow auto delete queue when queue is 
> not being used
> 
>
> Key: YARN-10532
> URL: https://issues.apache.org/jira/browse/YARN-10532
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Qi Zhu
>Priority: Major
> Attachments: YARN-10532.001.patch, YARN-10532.002.patch, 
> YARN-10532.003.patch, YARN-10532.004.patch, YARN-10532.005.patch, 
> YARN-10532.006.patch, YARN-10532.007.patch, YARN-10532.008.patch, 
> YARN-10532.009.patch, YARN-10532.010.patch, YARN-10532.011.patch, 
> YARN-10532.012.patch
>
>
> It's better if we can delete auto-created queues when they are not in use for 
> a period of time (like 5 mins). It will be helpful when we have a large 
> number of auto-created queues (e.g. from 500 users), but only a small subset 
> of queues are actively used.


