[jira] [Commented] (YARN-7563) Invalid event: FINISH_APPLICATION at NEW may make some application level resource be not cleaned
[ https://issues.apache.org/jira/browse/YARN-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268347#comment-16268347 ]

lujie commented on YARN-7563:
-----------------------------

I just attached a patch that contains a unit test to show this bug. I also tried to fix it based on the existing code, but I am not sure whether my solution is good. Please check it and let me know how to fix it better.

> Invalid event: FINISH_APPLICATION at NEW may make some application level resource be not cleaned
> ------------------------------------------------------------------------------------------------
>
>                 Key: YARN-7563
>                 URL: https://issues.apache.org/jira/browse/YARN-7563
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>    Affects Versions: 2.6.0, 3.0.0-beta1
>            Reporter: lujie
>         Attachments: YARN-7563.png, YARN-7563.txt
>
> I sent a kill command to the application; the nodemanager log shows:
> {code:java}
> 2017-11-25 19:18:48,126 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: couldn't find container container_1511608703018_0001_01_01 while processing FINISH_CONTAINERS event
> 2017-11-25 19:18:48,146 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: FINISH_APPLICATION at NEW
>         at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
>         at org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
>         at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:627)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:75)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:1508)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:1501)
>         at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
>         at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
>         at java.lang.Thread.run(Thread.java:745)
> 2017-11-25 19:18:48,151 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl: Application application_1511608703018_0001 transitioned from NEW to INITING
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
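The root cause visible in the trace is that ApplicationImpl's state machine has no transition registered for FINISH_APPLICATION while the application is still in NEW. As a simplified, self-contained sketch (this is not Hadoop's actual StateMachineFactory API; the class, enum, and method names here are illustrative only), the failure mode and the shape of a fix look like this:

```java
import java.util.*;

// Illustrative states/events modeled on the NM log; not the real ApplicationImpl enums.
enum AppState { NEW, INITING, RUNNING, FINISHED }
enum AppEvent { INIT_APPLICATION, FINISH_APPLICATION }

// Minimal transition-table state machine: an event with no registered
// transition for the current state throws, mirroring the
// InvalidStateTransitionException ("Invalid event: FINISH_APPLICATION at NEW").
class MiniStateMachine {
  private final Map<AppState, Map<AppEvent, AppState>> transitions =
      new EnumMap<>(AppState.class);
  private AppState current = AppState.NEW;

  void addTransition(AppState from, AppEvent on, AppState to) {
    transitions.computeIfAbsent(from, s -> new EnumMap<>(AppEvent.class)).put(on, to);
  }

  AppState handle(AppEvent event) {
    Map<AppEvent, AppState> byEvent = transitions.get(current);
    if (byEvent == null || !byEvent.containsKey(event)) {
      // No transition registered for (current, event): reject the event.
      throw new IllegalStateException("Invalid event: " + event + " at " + current);
    }
    current = byEvent.get(event);
    return current;
  }
}
```

One plausible direction for a fix (an assumption; the attached YARN-7563.txt should be checked for the actual approach) is to register a transition for FINISH_APPLICATION at NEW that moves the application straight to a terminal state and cleans up application-level resources, instead of leaving the event unhandled.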
[jira] [Updated] (YARN-7564) Cleanup to fix checkstyle issues of YARN-5881 branch
[ https://issues.apache.org/jira/browse/YARN-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil G updated YARN-7564:
--------------------------
    Summary: Cleanup to fix checkstyle issues of YARN-5881 branch  (was: Fix checkstyle issues of YARN-5881 branch)

> Cleanup to fix checkstyle issues of YARN-5881 branch
> ----------------------------------------------------
>
>                 Key: YARN-7564
>                 URL: https://issues.apache.org/jira/browse/YARN-7564
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Sunil G
>            Assignee: Sunil G
>            Priority: Minor
>         Attachments: YARN-7564-YARN-5881.001.patch
>
> Fix the Jenkins-reported issues.
[jira] [Created] (YARN-7573) Gpu Information page could be empty for nodes without GPU
Sunil G created YARN-7573:
--------------------------
             Summary: Gpu Information page could be empty for nodes without GPU
                 Key: YARN-7573
                 URL: https://issues.apache.org/jira/browse/YARN-7573
             Project: Hadoop YARN
          Issue Type: Sub-task
          Components: webapp, yarn-ui-v2
            Reporter: Sunil G
            Assignee: Sunil G

In the new YARN UI, the node page is not accessible if that node doesn't have any GPU. Also, under the node page, the "Gpu Information" left nav disappears when we click on "List of Containers/Applications".
[jira] [Updated] (YARN-7473) Implement Framework and policy for capacity management of auto created queues
[ https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suma Shivaprasad updated YARN-7473:
-----------------------------------
    Attachment: YARN-7473.10.patch

Thanks [~wangda]. Attached a patch with all the comments addressed.

> Implement Framework and policy for capacity management of auto created queues
> -----------------------------------------------------------------------------
>
>                 Key: YARN-7473
>                 URL: https://issues.apache.org/jira/browse/YARN-7473
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacity scheduler
>            Reporter: Suma Shivaprasad
>            Assignee: Suma Shivaprasad
>         Attachments: YARN-7473.1.patch, YARN-7473.10.patch, YARN-7473.2.patch, YARN-7473.3.patch, YARN-7473.4.patch, YARN-7473.5.patch, YARN-7473.6.patch, YARN-7473.7.patch, YARN-7473.8.patch, YARN-7473.9.patch
>
> This JIRA mainly addresses the following:
> 1. Support adding pluggable policies on a parent queue for dynamically managing capacity/state for leaf queues.
> 2. Implement a default policy that manages capacity based on pending applications and grants either guaranteed or zero capacity to queues based on the parent's available guaranteed capacity.
> 3. Integrate with the SchedulingEditPolicy framework to trigger this periodically and signal the scheduler to take the necessary actions for capacity/queue management.
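The default policy described in item 2 can be approximated with a short, self-contained sketch (the class name and method signature are hypothetical, not the YARN-7473 API): each auto-created queue with pending applications is granted its guaranteed capacity while the parent still has available guaranteed capacity; every other queue gets zero.

```java
import java.util.*;

// Hypothetical sketch of the default capacity-management policy described
// above; not the actual YARN-7473 classes.
class CapacityPolicySketch {
  // guaranteed: queue -> configured guaranteed capacity (fraction of parent),
  // iterated in insertion order. Returns queue -> granted capacity.
  static Map<String, Float> assign(LinkedHashMap<String, Float> guaranteed,
                                   Set<String> queuesWithPendingApps,
                                   float parentAvailable) {
    Map<String, Float> granted = new LinkedHashMap<>();
    float remaining = parentAvailable;
    for (Map.Entry<String, Float> e : guaranteed.entrySet()) {
      float want = e.getValue();
      if (queuesWithPendingApps.contains(e.getKey()) && remaining >= want) {
        granted.put(e.getKey(), want);  // pending apps and headroom: grant guaranteed capacity
        remaining -= want;
      } else {
        granted.put(e.getKey(), 0f);    // idle queue, or parent has no headroom left
      }
    }
    return granted;
  }
}
```

In the real framework this computation would run periodically from a SchedulingEditPolicy and its result would be signalled back to the scheduler, per item 3.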
[jira] [Commented] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268410#comment-16268410 ] ASF GitHub Bot commented on YARN-7499: -- Github user skmvasu closed the pull request at: https://github.com/apache/hadoop/pull/298 > Layout changes to Application details page > -- > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/298 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

chuanjie.duan updated YARN-7562:
--------------------------------
    Description: User algo submitted a mapreduce job, and the console log said "root.algo is not a leaf queue exception". root.algo is a parent queue, so this error is meaningless to me. Not sure why the parent queue was matched first. 3000 mb, 1 vcores 24000 mb, 8 vcores 4 1 fifo 300 4 mb, 10 vcores 20 mb, 60 vcores 300 4 mb, 10 vcores 20 mb, 60 vcores 20 fifo 4
  (was: User algo submitted a mapreduce job, and the console log said "root.algo is not a leaf queue exception". root.algo is a parent queue, so this error is meaningless to me. Not sure why the parent queue was matched first. 300)

> queuePlacementPolicy should not match parent queue
> --------------------------------------------------
>
>                 Key: YARN-7562
>                 URL: https://issues.apache.org/jira/browse/YARN-7562
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: fairscheduler, resourcemanager
>    Affects Versions: 2.7.1
>            Reporter: chuanjie.duan
>         Attachments: YARN-7562.patch
>
> User algo submitted a mapreduce job, and the console log said "root.algo is not a leaf queue exception". root.algo is a parent queue, so this error is meaningless to me. Not sure why the parent queue was matched first.
>
> 3000 mb, 1 vcores
> 24000 mb, 8 vcores
> 4
> 1
> fifo
>
> 300
> 4 mb, 10 vcores
> 20 mb, 60 vcores
>
> 300
> 4 mb, 10 vcores
> 20 mb, 60 vcores
> 20
> fifo
> 4
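The reported behaviour can be made concrete with a hypothetical FairScheduler allocation file (the queue names, resource values, and rule order below are illustrative, not the reporter's actual configuration): a placement rule such as `user` can resolve to root.algo even though it is a parent queue, which then fails with the "not a leaf queue" error.

```xml
<?xml version="1.0"?>
<!-- Hypothetical fair-scheduler.xml; names and limits are illustrative only. -->
<allocations>
  <queue name="algo" type="parent">
    <minResources>3000 mb, 1 vcores</minResources>
    <maxResources>24000 mb, 8 vcores</maxResources>
    <queue name="dev">
      <schedulingPolicy>fifo</schedulingPolicy>
    </queue>
  </queue>
  <queuePlacementPolicy>
    <!-- "user" places the app in a queue named after the submitter, so a
         user named "algo" resolves to root.algo, which is a parent queue
         and cannot accept applications. -->
    <rule name="specified"/>
    <rule name="user"/>
    <rule name="default" queue="dev"/>
  </queuePlacementPolicy>
</allocations>
```

A fix along the lines suggested by the title would presumably make such rules skip matches that resolve to a parent queue and fall through to the next rule; this is an assumption, and YARN-7562.patch should be consulted for the actual change.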
[jira] [Commented] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268416#comment-16268416 ] ASF GitHub Bot commented on YARN-7499: -- GitHub user skmvasu opened a pull request: https://github.com/apache/hadoop/pull/303 YARN-7499. Application page layout changes You can merge this pull request into a Git repository by running: $ git pull https://github.com/skmvasu/hadoop new_ia_changes Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hadoop/pull/303.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #303 commit 5d25d0f73c1ca81c56d1253bc956ff5958f15dc4 Author: Vasu Date: 2017-11-28T09:07:52Z App Page IA Changes commit 8a06f89c05bad5f45d0afc9bbb3db3ae4ee27d6c Author: Vasu Date: 2017-11-28T08:49:50Z Fix reload on Application details page > Layout changes to Application details page > -- > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/298 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268423#comment-16268423 ]

Sunil G commented on YARN-7499:
-------------------------------

Thanks [~skmvasu] for fixing the "refresh" button problem. Kicking Jenkins to check the latest changes.

> Layout changes to Application details page
> ------------------------------------------
>
>                 Key: YARN-7499
>                 URL: https://issues.apache.org/jira/browse/YARN-7499
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn-ui-v2
>            Reporter: Vasudevan Skm
>            Assignee: Vasudevan Skm
>
> Change the application page IA
> PR: https://github.com/apache/hadoop/pull/298
[jira] [Commented] (YARN-7563) Invalid event: FINISH_APPLICATION at NEW may make some application level resource be not cleaned
[ https://issues.apache.org/jira/browse/YARN-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268424#comment-16268424 ]

genericqa commented on YARN-7563:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s{color} | {color:blue} The patch file was not named according to hadoop's naming conventions. Please see https://wiki.apache.org/hadoop/HowToContribute for instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 40s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 3 new + 58 unchanged - 0 fixed = 61 total (was 58) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 7 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 30s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 32s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 49s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7563 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899567/YARN-7563.txt |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 68f277de8b54 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 94bed50 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151
[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chuanjie.duan updated YARN-7562: Description: User algo submit a mapreduce job, console log said "root.algo is not a leaf queue exception". root.algo is a parent queue, it's meanless for me. Not sure why parent queue added before 3000 mb, 1 vcores 24000 mb, 8 vcores 4 1 fifo 3000 mb, 1 vcores 24000 mb, 8 vcores 4 1 fifo 300 4 mb, 10 vcores 20 mb, 60 vcores 300 4 mb, 10 vcores 20 mb, 60 vcores 20 fifo 4 was: User algo submit a mapreduce job, console log said "root.algo is not a leaf queue exception". root.algo is a parent queue, it's meanless for me. Not sure why parent queue added before 3000 mb, 1 vcores 24000 mb, 8 vcores 4 1 fifo 300 4 mb, 10 vcores 20 mb, 60 vcores 300 4 mb, 10 vcores 20 mb, 60 vcores 20 fifo 4 > queuePlacementPolicy should not match parent queue > -- > > Key: YARN-7562 > URL: https://issues.apache.org/jira/browse/YARN-7562 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Affects Versions: 2.7.1 >Reporter: chuanjie.duan > Attachments: YARN-7562.patch > > > User algo submit a mapreduce job, console log said "root.algo is not a leaf > queue exception". > root.algo is a parent queue, it's meanless for me. Not sure why parent queue > added before > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 300 > 4 mb, 10 vcores > 20 mb, 60 vcores > > 300 > 4 mb, 10 vcores > 20 mb, 60 vcores > 20 > fifo > 4 > > > > > > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasudevan Skm updated YARN-7499: Description: Change the application page IA PR: https://github.com/apache/hadoop/pull/303 was: Change the application page IA PR: https://github.com/apache/hadoop/pull/302 > Layout changes to Application details page > -- > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasudevan Skm updated YARN-7499: Description: Change the application page IA PR: https://github.com/apache/hadoop/pull/302 was: Change the application page IA PR: https://github.com/apache/hadoop/pull/298 > Layout changes to Application details page > -- > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/302 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chuanjie.duan updated YARN-7562: Description: User algo submit a mapreduce job, console log said "root.algo is not a leaf queue exception". root.algo is a parent queue, it's meanless for me. Not sure why parent queue added before 3000 mb, 1 vcores 24000 mb, 8 vcores 4 1 fifo 3000 mb, 1 vcores 24000 mb, 8 vcores 4 1 fifo 300 4 mb, 10 vcores 20 mb, 60 vcores 300 4 mb, 10 vcores 10 mb, 30 vcores 20 fifo 4 300 4 mb, 10 vcores 10 mb, 30 vcores 20 fifo 4 was: User algo submit a mapreduce job, console log said "root.algo is not a leaf queue exception". root.algo is a parent queue, it's meanless for me. Not sure why parent queue added before 3000 mb, 1 vcores 24000 mb, 8 vcores 4 1 fifo 3000 mb, 1 vcores 24000 mb, 8 vcores 4 1 fifo 300 4 mb, 10 vcores 20 mb, 60 vcores 300 4 mb, 10 vcores 20 mb, 60 vcores 20 fifo 4 > queuePlacementPolicy should not match parent queue > -- > > Key: YARN-7562 > URL: https://issues.apache.org/jira/browse/YARN-7562 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Affects Versions: 2.7.1 >Reporter: chuanjie.duan > Attachments: YARN-7562.patch > > > User algo submit a mapreduce job, console log said "root.algo is not a leaf > queue exception". > root.algo is a parent queue, it's meanless for me. Not sure why parent queue > added before > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 300 > 4 mb, 10 vcores > 20 mb, 60 vcores > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > > > > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7574) Add support for Node Labels on Auto Created Leaf Queue Template
Suma Shivaprasad created YARN-7574:
-----------------------------------
             Summary: Add support for Node Labels on Auto Created Leaf Queue Template
                 Key: YARN-7574
                 URL: https://issues.apache.org/jira/browse/YARN-7574
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Suma Shivaprasad
            Assignee: Suma Shivaprasad

YARN-7473 adds support for auto created leaf queues to inherit node label capacities from parent queues. However, there is no support in the leaf queue template for configuring different capacities for different node labels.
[jira] [Commented] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268506#comment-16268506 ] genericqa commented on YARN-7499: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 37m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 23m 23s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7499 | | GITHUB PR | https://github.com/apache/hadoop/pull/303 | | Optional Tests | asflicense shadedclient | | uname | Linux 17ab3817c023 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 94bed50 | | maven | version: Apache Maven 3.3.9 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/18685/artifact/out/whitespace-eol.txt | | Max. process+thread count | 292 (vs. ulimit of 5000) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18685/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Layout changes to Application details page > -- > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268535#comment-16268535 ] genericqa commented on YARN-7499: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 0m 18s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 106 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 3s{color} | {color:red} The patch 1152 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 0m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m 9s{color} | {color:blue} ASF License check generated no output? 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7499 | | GITHUB PR | https://github.com/apache/hadoop/pull/303 | | Optional Tests | asflicense shadedclient | | uname | Linux 719d36177c43 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 94bed50 | | maven | version: Apache Maven 3.3.9 | | mvninstall | https://builds.apache.org/job/PreCommit-YARN-Build/18687/artifact/out/branch-mvninstall-root.txt | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/18687/artifact/out/whitespace-eol.txt | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/18687/artifact/out/whitespace-tabs.txt | | Max. process+thread count | 13 (vs. ulimit of 5000) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18687/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Layout changes to Application details page > -- > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7480) Render tooltips on columns where text is clipped
[ https://issues.apache.org/jira/browse/YARN-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268537#comment-16268537 ] genericqa commented on YARN-7480: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 23m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7480 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899144/YARN-7480.003.patch | | Optional Tests | asflicense shadedclient | | uname | Linux 09931eddc238 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 94bed50 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 410 (vs. ulimit of 5000) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18686/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Render tooltips on columns where text is clipped > > > Key: YARN-7480 > URL: https://issues.apache.org/jira/browse/YARN-7480 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Attachments: YARN-7480.001.patch, YARN-7480.002.patch, > YARN-7480.003.patch > > > In em-table, when text gets clipped the information is lost. Need to render a > tooltip to show the full text in these cases -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues
[ https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268544#comment-16268544 ] genericqa commented on YARN-7473: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 26s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 93 new + 252 unchanged - 8 fixed = 345 total (was 260) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 6s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 50s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}103m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.queuemanagement.GuaranteedOrZeroCapacityOverTimePolicy$PendingApplicationComparator is serializable but also an inner class of a non-serializable class At GuaranteedOrZeroCapacityOverTimePolicy.java:an inner class of a non-serializable class At GuaranteedOrZeroCapacityOverTimePolicy.java:[lines 223-239] | | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7473 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899575/YARN-7473.10.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ba750f78d132 4.4.0-64-gen
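The FindBugs -1 above flags a standard Java pitfall: a Comparator that implements Serializable but is a non-static inner class of a non-serializable class, so the comparator silently drags its enclosing instance into serialization. A minimal hedged illustration of the warning and the usual fix — the class and field names below are invented for illustration, not taken from the actual patch:

```java
import java.io.Serializable;
import java.util.Comparator;

public class QueuePolicy { // enclosing class is NOT Serializable

    // Problematic pattern: a non-static inner class keeps a hidden
    // reference to the enclosing QueuePolicy instance, so serializing
    // this comparator would attempt to serialize QueuePolicy too.
    // FindBugs reports this as "serializable but also an inner class
    // of a non-serializable class".
    class BadComparator implements Comparator<Integer>, Serializable {
        public int compare(Integer a, Integer b) {
            return a.compareTo(b);
        }
    }

    // Usual fix: make the comparator a static nested class, which
    // carries no reference to the enclosing instance.
    static class GoodComparator implements Comparator<Integer>, Serializable {
        private static final long serialVersionUID = 1L;
        public int compare(Integer a, Integer b) {
            return a.compareTo(b);
        }
    }
}
```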
[jira] [Commented] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268608#comment-16268608 ] genericqa commented on YARN-7499: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 26m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7499 | | GITHUB PR | https://github.com/apache/hadoop/pull/303 | | Optional Tests | asflicense shadedclient | | uname | Linux 011c67e21530 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 94bed50 | | maven | version: Apache Maven 3.3.9 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/18688/artifact/out/whitespace-eol.txt | | Max. process+thread count | 341 (vs. ulimit of 5000) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/18688/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Layout changes to Application details page > -- > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7499) Layout changes to Application details page
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268638#comment-16268638 ] Sunil G commented on YARN-7499: --- +1 Committing shortly if no objections. > Layout changes to Application details page > -- > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7480) Render tooltips on columns where text is clipped
[ https://issues.apache.org/jira/browse/YARN-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268639#comment-16268639 ] Sunil G commented on YARN-7480: --- +1 Committing shortly > Render tooltips on columns where text is clipped > > > Key: YARN-7480 > URL: https://issues.apache.org/jira/browse/YARN-7480 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Attachments: YARN-7480.001.patch, YARN-7480.002.patch, > YARN-7480.003.patch > > > In em-table, when text gets clipped the information is lost. Need to render a > tooltip to show the full text in these cases -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chuanjie.duan updated YARN-7562: Attachment: YARN-7562.002.patch > queuePlacementPolicy should not match parent queue > -- > > Key: YARN-7562 > URL: https://issues.apache.org/jira/browse/YARN-7562 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Affects Versions: 2.7.1 >Reporter: chuanjie.duan > Attachments: YARN-7562.002.patch, YARN-7562.patch > > > User algo submit a mapreduce job, console log said "root.algo is not a leaf > queue exception". > root.algo is a parent queue, it's meanless for me. Not sure why parent queue > added before > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 300 > 4 mb, 10 vcores > 20 mb, 60 vcores > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > > > > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268643#comment-16268643 ] chuanjie.duan commented on YARN-7562: - Sorry, I didn't paste the full configuration; I have updated it. What I expect is: when user cim submits a job, it matches the root.cim queue (a leaf queue). algo is like a department, and testa and testb are teams under that department. Each department has only one hadoop user (algo), so algo would sometimes "set mapreduce.job.queuename=root.algo.testa" and submit a job, expecting the queue to be "root.algo.testa", not the parent queue "root.algo". In my opinion, the rules could be more intelligent: for the user, primaryGroup, secondaryGroupExistingQueue, and nestedUserQueue rules, a match should be skipped if the queue name is a parent (a parent queue is meaningless for the user); for the specified rule, returning a "not a leaf queue" exception is OK. I updated my patch. > queuePlacementPolicy should not match parent queue > -- > > Key: YARN-7562 > URL: https://issues.apache.org/jira/browse/YARN-7562 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Affects Versions: 2.7.1 >Reporter: chuanjie.duan > Attachments: YARN-7562.002.patch, YARN-7562.patch > > > User algo submit a mapreduce job, console log said "root.algo is not a leaf > queue exception". > root.algo is a parent queue, it's meanless for me. Not sure why parent queue > added before > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 300 > 4 mb, 10 vcores > 20 mb, 60 vcores > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > > > > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
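For context, the placement rules being discussed are configured via a queuePlacementPolicy element in fair-scheduler.xml, roughly like the following. The rule names come from the FairScheduler documentation; the particular nesting and `create` flags shown here are an illustrative sketch, not the reporter's actual configuration:

```xml
<queuePlacementPolicy>
  <!-- Use the queue named in mapreduce.job.queuename, if one was given -->
  <rule name="specified" create="false"/>
  <!-- Otherwise place the job in a per-user queue nested under a
       parent queue named after the user's primary group -->
  <rule name="nestedUserQueue">
    <rule name="primaryGroup" create="false"/>
  </rule>
  <!-- Fall back to the default queue -->
  <rule name="default"/>
</queuePlacementPolicy>
```

The issue described above arises when a rule such as specified resolves to a name like root.algo that happens to be a parent queue, which the scheduler then rejects as "not a leaf queue".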
[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chuanjie.duan updated YARN-7562: Attachment: YARN-7562.002.patch > queuePlacementPolicy should not match parent queue > -- > > Key: YARN-7562 > URL: https://issues.apache.org/jira/browse/YARN-7562 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Affects Versions: 2.7.1 >Reporter: chuanjie.duan > Attachments: YARN-7562.002.patch, YARN-7562.patch > > > User algo submit a mapreduce job, console log said "root.algo is not a leaf > queue exception". > root.algo is a parent queue, it's meanless for me. Not sure why parent queue > added before > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 300 > 4 mb, 10 vcores > 20 mb, 60 vcores > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > > > > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chuanjie.duan updated YARN-7562: Attachment: (was: YARN-7562.002.patch) > queuePlacementPolicy should not match parent queue > -- > > Key: YARN-7562 > URL: https://issues.apache.org/jira/browse/YARN-7562 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Affects Versions: 2.7.1 >Reporter: chuanjie.duan > Attachments: YARN-7562.002.patch, YARN-7562.patch > > > User algo submit a mapreduce job, console log said "root.algo is not a leaf > queue exception". > root.algo is a parent queue, it's meanless for me. Not sure why parent queue > added before > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 300 > 4 mb, 10 vcores > 20 mb, 60 vcores > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > > > > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7499) Layout changes to Application details page in new YARN UI
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7499: -- Summary: Layout changes to Application details page in new YARN UI (was: Layout changes to Application details page) > Layout changes to Application details page in new YARN UI > - > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Fix For: 3.1.0 > > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7499) Layout changes to Application details page in new YARN UI
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268699#comment-16268699 ] ASF GitHub Bot commented on YARN-7499: -- Github user sunilgovind commented on the issue: https://github.com/apache/hadoop/pull/303 Committed to trunk with sha id 641ba5c7a1471f8d799b1f919cd41daffb9da84e > Layout changes to Application details page in new YARN UI > - > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Fix For: 3.1.0 > > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7499) Layout changes to Application details page in new YARN UI
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268716#comment-16268716 ] Hudson commented on YARN-7499: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13284 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13284/]) YARN-7499. Layout changes to Application details page in new YARN UI. (sunilg: rev 641ba5c7a1471f8d799b1f919cd41daffb9da84e) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/router.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/attempts.hbs * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-services.hbs * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/yarn-app.scss * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/colors.scss * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/attempts.js * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/layout.scss * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/variables.scss * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/charts.hbs * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/configs.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/charts.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/components.js * (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/info.hbs * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/components.hbs * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/configs.hbs * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/loading.hbs * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.scss * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/info.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-flowrun/info.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/timeline-view.hbs > Layout changes to Application details page in new YARN UI > - > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Fix For: 3.1.0 > > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268764#comment-16268764 ] genericqa commented on YARN-7562: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 26s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 212 unchanged - 0 fixed = 214 total (was 212) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 1s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 58s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}106m 22s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.fair.TestQueuePlacementPolicy | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7562 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899600/YARN-7562.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 39dd73a28cb7 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0ea182d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-R
[jira] [Commented] (YARN-7499) Layout changes to Application details page in new YARN UI
[ https://issues.apache.org/jira/browse/YARN-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268808#comment-16268808 ] ASF GitHub Bot commented on YARN-7499: -- Github user skmvasu closed the pull request at: https://github.com/apache/hadoop/pull/303 > Layout changes to Application details page in new YARN UI > - > > Key: YARN-7499 > URL: https://issues.apache.org/jira/browse/YARN-7499 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Fix For: 3.1.0 > > > Change the application page IA > PR: https://github.com/apache/hadoop/pull/303 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7487) Make sure volume includes GPU base libraries exists after created by plugin
[ https://issues.apache.org/jira/browse/YARN-7487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268818#comment-16268818 ] Sunil G commented on YARN-7487: --- Hi [~leftnoteasy] Generally the patch looks good. A few minor comments. # We could add some tests for the inspect command if possible. # In the below code snippet, {code} 403 String d = null; 404 if (arr.length > 1) { 405 d = arr[1]; 406 } 407 if (volumeName.equals(v) && driverName.equals(d)) { {code} *d* could be null at a given point. It's fine to have, but still better to check it? # In {{List allCaptuures = opCaptor.getAllValues();}}, {{allCaptuures}} should be {{allCaptures}}. # In {{test_docker_util.cc}}, we removed the below line and added the last one. {code} -"[docker-command-execution]\n docker-command=volume\n sub-command=ls\n volume=volume1 \n driver=driver1", +"[docker-command-execution]\n docker-command=volume\n sub-command=inspect\n volume=volume1 \n driver=driver1", {code} But at the sub_command level, "create/ls are the only acceptable sub-command of volume" as per the comment. Is there some difference here? > Make sure volume includes GPU base libraries exists after created by plugin > --- > > Key: YARN-7487 > URL: https://issues.apache.org/jira/browse/YARN-7487 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-7487.002.patch, YARN-7487.003.patch, > YARN-7487.wip.001.patch > > > YARN-7224 will create docker volume includes GPU base libraries when launch a > docker container which needs GPU. > This JIRA will add necessary checks to make sure docker volume exists before > launching the container to reduce debug efforts if container fails. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
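On the null check raised in the review comment above, a null-safe comparison along these lines is one hedged possibility. The class and method names below are invented for illustration; only the variable names come from the quoted snippet:

```java
import java.util.Objects;

public class VolumeCheck {
    // Compares an expected volume/driver pair against values parsed from
    // command output. Objects.equals is null-safe for either argument, so
    // the case where no driver was parsed (d == null, i.e. arr.length <= 1)
    // needs no special handling and cannot throw a NullPointerException.
    static boolean matches(String volumeName, String driverName,
                           String v, String d) {
        return Objects.equals(volumeName, v) && Objects.equals(driverName, d);
    }
}
```

With this idiom the explicit null guard the reviewer asks about becomes unnecessary, since a null on either side simply compares unequal to a non-null value.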
[jira] [Commented] (YARN-7480) Render tooltips on columns where text is clipped
[ https://issues.apache.org/jira/browse/YARN-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268838#comment-16268838 ] Vasudevan Skm commented on YARN-7480: - [~sunil.gov...@gmail.com] The PR and patch are the same. I had uploaded a copy here assuming that the PR doesn't trigger Jenkins. Sorry about the confusion. > Render tooltips on columns where text is clipped > > > Key: YARN-7480 > URL: https://issues.apache.org/jira/browse/YARN-7480 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Attachments: YARN-7480.001.patch, YARN-7480.002.patch, > YARN-7480.003.patch > > > In em-table, when text gets clipped the information is lost. Need to render a > tooltip to show the full text in these cases -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7546) In queue page node details are hidden on click
[ https://issues.apache.org/jira/browse/YARN-7546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268840#comment-16268840 ] ASF GitHub Bot commented on YARN-7546: -- Github user skmvasu closed the pull request at: https://github.com/apache/hadoop/pull/294 > In queue page node details are hidden on click > --- > > Key: YARN-7546 > URL: https://issues.apache.org/jira/browse/YARN-7546 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > > In the queue page, the selected node information is at the bottom. When there are a > lot of nodes, this info is hidden and users have to scroll down to see what's > happening -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7522) Add application tags manager implementation
[ https://issues.apache.org/jira/browse/YARN-7522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16268877#comment-16268877 ] Arun Suresh commented on YARN-7522: --- Thanks for working on this [~wangda]. In general, the idea looks fine. A couple of things to consider while fleshing this out: * We need to figure out at which point in the scheduler / container life cycle we are planning on calling addContainer and removeContainer. I propose we do so in a scheduler-agnostic manner: somewhere in the AbstractYarnScheduler / the AppSchedulingInfo, at the point of allocation (unfortunately, I don't think AppSchedulingInfo is notified of container release/removal) or the SchedulingAppAttempt. * How are we planning on persisting this across RM restarts? I am not in favor of pushing all this information into ZK. This unfortunately means the tags have to be pushed down to the NM so they can be retrieved from the NM heartbeats during RM recovery. In that case, we have to figure out how to deal with the delay from ACQUIRED to RUNNING of a container. The former is when the RM has allocated and notified the AM, and the latter is when the NM actually gets to know about the Container (after the AM has called start container). If we are relying on the NM to persist this information, we should update the Tag manager only after the NM notifies the RM of the running container. > Add application tags manager implementation > --- > > Key: YARN-7522 > URL: https://issues.apache.org/jira/browse/YARN-7522 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Wangda Tan > Attachments: YARN-7522.YARN-6592.wip-001.patch > > > This is different from YARN-6596, YARN-6596 is targeted to add constraint > manager to store intra/inter application placement constraints. This JIRA is > targeted to support storing maps between container-tags/applications and > nodes. 
This will be required by affinity/anti-affinity implementation and > cardinality. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
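The addContainer/removeContainer bookkeeping described above can be pictured as a per-node map from tag to cardinality. The following is a minimal, hypothetical sketch of that idea only — the class and method names here are illustrative and do not match the actual YARN implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: node -> tag -> cardinality bookkeeping.
// Names are illustrative, not the real YARN-7522 API.
class TagStore {
    private final Map<String, Map<String, Integer>> tagsPerNode = new HashMap<>();

    // Would be invoked from a scheduler-agnostic point at allocation time.
    public void addContainer(String node, String... tags) {
        Map<String, Integer> counts =
            tagsPerNode.computeIfAbsent(node, n -> new HashMap<>());
        for (String tag : tags) {
            counts.merge(tag, 1, Integer::sum);
        }
    }

    // Would be invoked on container release/completion.
    public void removeContainer(String node, String... tags) {
        Map<String, Integer> counts = tagsPerNode.get(node);
        if (counts == null) {
            return;
        }
        for (String tag : tags) {
            // Decrement, dropping the entry once the count reaches zero.
            counts.computeIfPresent(tag, (t, c) -> c > 1 ? c - 1 : null);
        }
    }

    // Cardinality query that affinity/anti-affinity checks would consult.
    public int getCardinality(String node, String tag) {
        Map<String, Integer> counts = tagsPerNode.get(node);
        return counts == null ? 0 : counts.getOrDefault(tag, 0);
    }
}
```

The persistence question in the comment is exactly about when addContainer may safely be called: if the NM is the store of record, the RM would only update such a map once the NM reports the container as running.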
[jira] [Updated] (YARN-7573) Gpu Information page could be empty for nodes without GPU
[ https://issues.apache.org/jira/browse/YARN-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7573: -- Attachment: YARN-7573.001.patch Attaching an initial patch. Hi [~leftnoteasy], could you please help to check this in a cluster where GPUs are present. Thank you. > Gpu Information page could be empty for nodes without GPU > - > > Key: YARN-7573 > URL: https://issues.apache.org/jira/browse/YARN-7573 > Project: Hadoop YARN > Issue Type: Sub-task > Components: webapp, yarn-ui-v2 >Reporter: Sunil G >Assignee: Sunil G > Attachments: YARN-7573.001.patch > > > In the new YARN UI, the node page is not accessible if that node doesn't have any GPU. > Also, under the node page, when we click on "List of Containers/Applications", the Gpu > Information left nav disappears.
[jira] [Commented] (YARN-7573) Gpu Information page could be empty for nodes without GPU
[ https://issues.apache.org/jira/browse/YARN-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269029#comment-16269029 ] genericqa commented on YARN-7573: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 59s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 0s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 84m 6s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.nodemanager.webapp.TestNMWebServices | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7573 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899631/YARN-7573.001.p
[jira] [Assigned] (YARN-7455) quote_and_append_arg can overflow buffer
[ https://issues.apache.org/jira/browse/YARN-7455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan reassigned YARN-7455: - Assignee: Jim Brennan > quote_and_append_arg can overflow buffer > > > Key: YARN-7455 > URL: https://issues.apache.org/jira/browse/YARN-7455 > Project: Hadoop YARN > Issue Type: Bug > Components: nodemanager >Affects Versions: 2.9.0, 3.0.0 >Reporter: Jason Lowe >Assignee: Jim Brennan > > While reviewing YARN-7197 I noticed that add_mounts in docker_util.c has a > potential buffer overflow since tmp_buffer is only 1024 bytes which may not > be sufficient to hold the specified mount path. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException
[ https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269045#comment-16269045 ] Jason Lowe commented on YARN-6647: -- +1 lgtm as well. Committing this. > RM can crash during transitionToStandby due to InterruptedException > --- > > Key: YARN-6647 > URL: https://issues.apache.org/jira/browse/YARN-6647 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha4 >Reporter: Jason Lowe >Assignee: Bibin A Chundatt >Priority: Critical > Attachments: YARN-6647.001.patch, YARN-6647.002.patch, > YARN-6647.003.patch, YARN-6647.004.patch, YARN-6647.005.patch > > > Noticed some tests were failing due to the JVM shutting down early. I was > able to reproduce this occasionally with TestKillApplicationWithRMHA. > Stacktrace to follow. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7480) Render tooltips on columns where text is clipped in new YARN UI
[ https://issues.apache.org/jira/browse/YARN-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269064#comment-16269064 ] ASF GitHub Bot commented on YARN-7480: -- Github user asfgit closed the pull request at: https://github.com/apache/hadoop/pull/293 > Render tooltips on columns where text is clipped in new YARN UI > --- > > Key: YARN-7480 > URL: https://issues.apache.org/jira/browse/YARN-7480 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Fix For: 3.1.0 > > Attachments: YARN-7480.001.patch, YARN-7480.002.patch, > YARN-7480.003.patch > > > In em-table, when text gets clipped the information is lost. Need to render a > tooltip to show the full text in these cases -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7480) Render tooltips on columns where text is clipped in new YARN UI
[ https://issues.apache.org/jira/browse/YARN-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7480: -- Summary: Render tooltips on columns where text is clipped in new YARN UI (was: Render tooltips on columns where text is clipped) > Render tooltips on columns where text is clipped in new YARN UI > --- > > Key: YARN-7480 > URL: https://issues.apache.org/jira/browse/YARN-7480 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Fix For: 3.1.0 > > Attachments: YARN-7480.001.patch, YARN-7480.002.patch, > YARN-7480.003.patch > > > In em-table, when text gets clipped the information is lost. Need to render a > tooltip to show the full text in these cases -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5422) ContainerLocalizer log should be logged in separate log file.
[ https://issues.apache.org/jira/browse/YARN-5422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269074#comment-16269074 ] Billie Rinaldi commented on YARN-5422: -- Looks like this is resolved in YARN-7363. > ContainerLocalizer log should be logged in separate log file. > - > > Key: YARN-5422 > URL: https://issues.apache.org/jira/browse/YARN-5422 > Project: Hadoop YARN > Issue Type: Bug > Components: applications >Affects Versions: 2.7.1 >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > > We should set the log4j for the ContainerLocalizer jvm. Currently it will use > the NM log4j and it will log the logs in NM hadoop.log file. > If NM user and application user is different, then ContainerLocalizer will > not be able to log in hadoop.log file. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7480) Render tooltips on columns where text is clipped in new YARN UI
[ https://issues.apache.org/jira/browse/YARN-7480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269094#comment-16269094 ] Hudson commented on YARN-7480: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13285 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13285/]) YARN-7480. Render tooltips on columns where text is clipped in new YARN (sunilg: rev 6b76695f886d4db7287a0425d56d5e13daf5d08d) * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/integration/components/em-table-tooltip-text-test.js * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/em-table-tooltip-text.js * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/em-table-tooltip-text.hbs > Render tooltips on columns where text is clipped in new YARN UI > --- > > Key: YARN-7480 > URL: https://issues.apache.org/jira/browse/YARN-7480 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-ui-v2 >Reporter: Vasudevan Skm >Assignee: Vasudevan Skm > Fix For: 3.1.0 > > Attachments: YARN-7480.001.patch, YARN-7480.002.patch, > YARN-7480.003.patch > > > In em-table, when text gets clipped the information is lost. Need to render a > tooltip to show the full text in these cases -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
Eric Payne created YARN-7575: Summary: When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue Key: YARN-7575 URL: https://issues.apache.org/jira/browse/YARN-7575 Project: Hadoop YARN Issue Type: Bug Components: capacity scheduler Reporter: Eric Payne I encountered the following while reviewing and testing branch YARN-5881. The design document from YARN-5881 says that for max-capacity: {quote} 3) For each queue, we require: a) if max-resource not set, it automatically set to parent.max-resource {quote} When I try leaving blank {{yarn.scheduler.capacity.< queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It looks like it's in {{CapacitySchedulerPage$ LeafQueueInfoBlock}}: {noformat} 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error handling URI: /cluster/scheduler java.lang.reflect.InvocationTargetException ... at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) at org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) {noformat} Also... A job will run in the leaf queue with no max capacity set and it will grow to the max capacity of the cluster, but if I add resources to the node, the job won't grow any more even though it has pending resources. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Payne updated YARN-7575: - Issue Type: Sub-task (was: Bug) Parent: YARN-5881 > When using absolute capacity configuration with no max capacity, scheduler UI > NPEs and can't grow queue > --- > > Key: YARN-7575 > URL: https://issues.apache.org/jira/browse/YARN-7575 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Eric Payne > > I encountered the following while reviewing and testing branch YARN-5881. > The design document from YARN-5881 says that for max-capacity: > {quote} > 3) For each queue, we require: > a) if max-resource not set, it automatically set to parent.max-resource > {quote} > When I try leaving blank {{yarn.scheduler.capacity.< > queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It > looks like it's in {{CapacitySchedulerPage$ LeafQueueInfoBlock}}: > {noformat} > 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > ... > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) > {noformat} > Also... A job will run in the leaf queue with no max capacity set and it will > grow to the max capacity of the cluster, but if I add resources to the node, > the job won't grow any more even though it has pending resources. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated YARN-7575: -- Attachment: YARN-7575-YARN-5881.001.patch Thanks [~eepayne]. Attaching an initial patch to address the UI problem, along with a test case to verify the relation between the absolute resource configuration and cluster expansion. > When using absolute capacity configuration with no max capacity, scheduler UI > NPEs and can't grow queue > --- > > Key: YARN-7575 > URL: https://issues.apache.org/jira/browse/YARN-7575 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Eric Payne > Attachments: YARN-7575-YARN-5881.001.patch > > > I encountered the following while reviewing and testing branch YARN-5881. > The design document from YARN-5881 says that for max-capacity: > {quote} > 3) For each queue, we require: > a) if max-resource not set, it automatically set to parent.max-resource > {quote} > When I try leaving blank {{yarn.scheduler.capacity.< > queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It > looks like it's in {{CapacitySchedulerPage$ LeafQueueInfoBlock}}: > {noformat} > 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > ... > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) > {noformat} > Also... A job will run in the leaf queue with no max capacity set and it will > grow to the max capacity of the cluster, but if I add resources to the node, > the job won't grow any more even though it has pending resources. 
[jira] [Commented] (YARN-7558) "yarn logs" command fails to get logs for running containers if UI authentication is enabled.
[ https://issues.apache.org/jira/browse/YARN-7558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269143#comment-16269143 ] Xuan Gong commented on YARN-7558: - Testcase failure is not related > "yarn logs" command fails to get logs for running containers if UI > authentication is enabled. > - > > Key: YARN-7558 > URL: https://issues.apache.org/jira/browse/YARN-7558 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Namit Maheshwari >Assignee: Xuan Gong >Priority: Critical > Attachments: YARN-7558.1.patch, YARN-7558.2.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6647) RM can crash during transitionToStandby due to InterruptedException
[ https://issues.apache.org/jira/browse/YARN-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269146#comment-16269146 ] Hudson commented on YARN-6647: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13286 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13286/]) YARN-6647. RM can crash during transitionToStandby due to (jlowe: rev a2c7a73e33045ce42cce19aacbe45c0421a61994) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/RMDelegationTokenSecretManager.java > RM can crash during transitionToStandby due to InterruptedException > --- > > Key: YARN-6647 > URL: https://issues.apache.org/jira/browse/YARN-6647 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha4 >Reporter: Jason Lowe >Assignee: Bibin A Chundatt >Priority: Critical > Fix For: 3.0.0 > > Attachments: YARN-6647.001.patch, YARN-6647.002.patch, > YARN-6647.003.patch, YARN-6647.004.patch, YARN-6647.005.patch > > > Noticed some tests were failing due to the JVM shutting down early. I was > able to reproduce this occasionally with TestKillApplicationWithRMHA. > Stacktrace to follow. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
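The general pattern behind this fix — treating an InterruptedException during a standby transition as an expected shutdown signal rather than a fatal error — can be sketched as follows. This is an illustrative example only, not the actual YARN-6647 diff to RMDelegationTokenSecretManager:

```java
// Illustrative pattern: a background worker (stand-in for a token/key roller)
// that exits cleanly when interrupted during shutdown instead of letting the
// InterruptedException escape and crash the process.
class KeyRoller implements Runnable {
    private volatile boolean running = true;

    public void stop() {
        running = false;
    }

    @Override
    public void run() {
        while (running) {
            try {
                Thread.sleep(50); // stand-in for the roll interval
            } catch (InterruptedException ie) {
                // Expected during transitionToStandby: restore the interrupt
                // status and return, rather than propagating the exception.
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

Crashes like the one reported here typically come from code on this path that either rethrows the InterruptedException as a RuntimeException or treats it as an unexpected fatal condition.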
[jira] [Commented] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269175#comment-16269175 ] Eric Payne commented on YARN-7575: -- [~sunilg], the fix for the UI NPE looks good, but the other problem I'm having is that when I increase a node size, the queue doesn't grow. My configs are as follows: - 4 node managers, 5120GB and 10 Vcores each for a total of [20480GB, 40 VCores] - {{yarn.scheduler.capacity.root.default.capacity}}: [memory=10240,vcores=20] - {{yarn.scheduler.capacity.root.eng.capacity}}: [memory=10240,vcores=20] - Note that I do not set root.capacity, nor do I set any maximum-capacity. My use case is as follows: - I start a job requesting 22.5GB and 45 vcores (container size=0.5GB) - the job consumes 20GB and 40 vcores - I add 2.5GB and 5 vcores to one of the nodes: {{yarn rmadmin -updateNodeResource host:port 7680 15}} - One more container is assigned to the job, but that only brings the job to 20.5GB and 41 vcores. > When using absolute capacity configuration with no max capacity, scheduler UI > NPEs and can't grow queue > --- > > Key: YARN-7575 > URL: https://issues.apache.org/jira/browse/YARN-7575 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Eric Payne > Attachments: YARN-7575-YARN-5881.001.patch > > > I encountered the following while reviewing and testing branch YARN-5881. > The design document from YARN-5881 says that for max-capacity: > {quote} > 3) For each queue, we require: > a) if max-resource not set, it automatically set to parent.max-resource > {quote} > When I try leaving blank {{yarn.scheduler.capacity.< > queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It > looks like it's in {{CapacitySchedulerPage$ LeafQueueInfoBlock}}: > {noformat} > 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > ... 
> at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) > {noformat} > Also... A job will run in the leaf queue with no max capacity set and it will > grow to the max capacity of the cluster, but if I add resources to the node, > the job won't grow any more even though it has pending resources. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269264#comment-16269264 ] Wangda Tan commented on YARN-7575: -- Thanks [~eepayne], I just tried to reproduce this but failed. Here's what I did: 1) Set up two queues; both a and b have capacity: {{\[memory=2048,vcores=8\]}}. No maximum-capacity is set. a.user-limit = 100 2) Register a single NM with 10GB memory. 3) Run a job requesting 100 GB of resources. It can use all 10GB memory. 4) Update the node resource to 30GB. The job can use all 30GB memory. 5) Update the node resource again to 100GB. The job can use all 100GB memory. Even though this may not be exactly the same as your example, I think it should get the same result. Did I miss anything? > When using absolute capacity configuration with no max capacity, scheduler UI > NPEs and can't grow queue > --- > > Key: YARN-7575 > URL: https://issues.apache.org/jira/browse/YARN-7575 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Eric Payne > Attachments: YARN-7575-YARN-5881.001.patch > > > I encountered the following while reviewing and testing branch YARN-5881. > The design document from YARN-5881 says that for max-capacity: > {quote} > 3) For each queue, we require: > a) if max-resource not set, it automatically set to parent.max-resource > {quote} > When I try leaving blank {{yarn.scheduler.capacity.< > queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It > looks like it's in {{CapacitySchedulerPage$ LeafQueueInfoBlock}}: > {noformat} > 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > ... 
> at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) > {noformat} > Also... A job will run in the leaf queue with no max capacity set and it will > grow to the max capacity of the cluster, but if I add resources to the node, > the job won't grow any more even though it has pending resources. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7491) Make sure AM is not scheduled on an opportunistic container
[ https://issues.apache.org/jira/browse/YARN-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269285#comment-16269285 ] Miklos Szegedi commented on YARN-7491: -- We discussed this in person. The issue is that there is no unit test for newAMResourceRequest, and it probably needs to be based on newResourceRequest to group the common code together. Also, please address the checkstyle issue. > Make sure AM is not scheduled on an opportunistic container > --- > > Key: YARN-7491 > URL: https://issues.apache.org/jira/browse/YARN-7491 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-7491-YARN-1011.00.patch > >
[jira] [Commented] (YARN-7572) Make the service status output more readable
[ https://issues.apache.org/jira/browse/YARN-7572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269304#comment-16269304 ] Vinod Kumar Vavilapalli commented on YARN-7572: --- I think in addition to a human readable output, we should still have a --json option that spits out json output for folks to script against if they'd like to. > Make the service status output more readable > - > > Key: YARN-7572 > URL: https://issues.apache.org/jira/browse/YARN-7572 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He > Fix For: yarn-native-services > > > Currently the service status output is just a JSON spec, we can make it more > human readable -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7561) Why hasContainerForNode() return false directly when there is no request of ANY locality without considering NODE_LOCAL and RACK_LOCAL?
[ https://issues.apache.org/jira/browse/YARN-7561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269324#comment-16269324 ] Robert Kanter commented on YARN-7561: - That's just the way it works: you have to *always* specify the "less specific" request(s) when specifying a request. For example, if you want a rack, you have to specify ANY, and if you want a node, you have to specify a rack and ANY. So you effectively always have to specify at least the ANY request no matter what. As to _why_ it works that way, I'm not sure - it looks to have been that way for quite a long time, possibly since the beginning. It does seem like a rather confusing requirement. > Why hasContainerForNode() return false directly when there is no request of > ANY locality without considering NODE_LOCAL and RACK_LOCAL? > --- > > Key: YARN-7561 > URL: https://issues.apache.org/jira/browse/YARN-7561 > Project: Hadoop YARN > Issue Type: Task > Components: fairscheduler >Affects Versions: 2.7.3 >Reporter: wuchang > > I am studying the FairScheduler source code of YARN 2.7.3. 
> By the code of class FSAppAttempt: > {code} > public boolean hasContainerForNode(Priority prio, FSSchedulerNode node) { > ResourceRequest anyRequest = getResourceRequest(prio, > ResourceRequest.ANY); > ResourceRequest rackRequest = getResourceRequest(prio, > node.getRackName()); > ResourceRequest nodeRequest = getResourceRequest(prio, > node.getNodeName()); > > return > // There must be outstanding requests at the given priority: > anyRequest != null && anyRequest.getNumContainers() > 0 && > // If locality relaxation is turned off at *-level, there must be > a > // non-zero request for the node's rack: > (anyRequest.getRelaxLocality() || > (rackRequest != null && rackRequest.getNumContainers() > 0)) > && > // If locality relaxation is turned off at rack-level, there must > be a > // non-zero request at the node: > (rackRequest == null || rackRequest.getRelaxLocality() || > (nodeRequest != null && nodeRequest.getNumContainers() > 0)) > && > // The requested container must be able to fit on the node: > Resources.lessThanOrEqual(RESOURCE_CALCULATOR, null, > anyRequest.getCapability(), > node.getRMNode().getTotalCapability()); > } > {code} > I really cannot understand why when there is no anyRequest , > *hasContainerForNode()* return false directly without considering whether > there is NODE_LOCAL or RACK_LOCAL requests. > And , *AppSchedulingInfo.allocateNodeLocal()* and > *AppSchedulingInfo.allocateRackLocal()* will also decrease the number of > containers for *ResourceRequest.ANY*, this is another place where I feel > confused. > Really thanks for some prompt. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
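The short-circuit being asked about can be seen in a simplified, self-contained model of the quoted check. The real code works on ResourceRequest objects; this sketch just keys a request table by resource name ("*" for ANY, a rack name, or a host name) and omits the capacity-fit clause:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of FSAppAttempt#hasContainerForNode. Illustrative only:
// the real YARN classes and signatures are not reproduced here.
class LocalityModel {
    public static final String ANY = "*";
    private final Map<String, Integer> count = new HashMap<>();
    private final Map<String, Boolean> relax = new HashMap<>();

    public void request(String name, int numContainers, boolean relaxLocality) {
        count.put(name, numContainers);
        relax.put(name, relaxLocality);
    }

    public boolean hasContainerForNode(String rack, String host) {
        Integer any = count.get(ANY);
        // With no outstanding ANY request, the method short-circuits to
        // false, no matter what node/rack-local requests exist -- this is
        // the behavior the question is about.
        if (any == null || any <= 0) {
            return false;
        }
        Integer rackReq = count.get(rack);
        // If relaxation is off at the ANY level, a non-zero rack request
        // is required.
        boolean rackOk = relax.get(ANY) || (rackReq != null && rackReq > 0);
        // If relaxation is off at the rack level, a non-zero node request
        // is required.
        Integer nodeReq = count.get(host);
        boolean nodeOk = rackReq == null || relax.get(rack)
            || (nodeReq != null && nodeReq > 0);
        return rackOk && nodeOk;
    }
}
```

This also explains the second point of confusion: because every node-local or rack-local request is always paired with an ANY request at the same priority, a node- or rack-local allocation must decrement the ANY count too, or the ANY request would over-report outstanding containers.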
[jira] [Created] (YARN-7576) Findbug warning for Resource exposing internal representation
Jason Lowe created YARN-7576: Summary: Findbug warning for Resource exposing internal representation Key: YARN-7576 URL: https://issues.apache.org/jira/browse/YARN-7576 Project: Hadoop YARN Issue Type: Bug Components: api Affects Versions: 3.0.0 Reporter: Jason Lowe Precommit builds are complaining about a findbugs warning: {noformat} EI org.apache.hadoop.yarn.api.records.Resource.getResources() may expose internal representation by returning Resource.resources Bug type EI_EXPOSE_REP (click for details) In class org.apache.hadoop.yarn.api.records.Resource In method org.apache.hadoop.yarn.api.records.Resource.getResources() Field org.apache.hadoop.yarn.api.records.Resource.resources At Resource.java:[line 213] Returning a reference to a mutable object value stored in one of the object's fields exposes the internal representation of the object. If instances are accessed by untrusted code, and unchecked changes to the mutable object would compromise security or other important properties, you will need to do something different. Returning a new copy of the object is better approach in many situations. {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
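The textbook remedy for an EI_EXPOSE_REP warning is to return a defensive copy of the mutable field rather than the field itself. A minimal illustration — this is a made-up class, not the real org.apache.hadoop.yarn.api.records.Resource:

```java
// Illustrative only: minimal defensive-copy pattern that silences
// findbugs EI_EXPOSE_REP. Not the actual YARN Resource class.
import java.util.Arrays;

public class SafeResource {
    private final long[] resources;

    public SafeResource(long[] resources) {
        // Copy on the way in as well, so the caller can't mutate our state later.
        this.resources = Arrays.copyOf(resources, resources.length);
    }

    // Returning a copy keeps the internal array private to this object.
    public long[] getResources() {
        return Arrays.copyOf(resources, resources.length);
    }
}
```

As the YARN-7556 comment further down notes, YARN instead added a findbugs exclusion for this warning — a reasonable alternative when the getter sits on a hot scheduler path and copying on every call would be wasteful, though that trade-off is an inference here rather than something stated in the thread.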
[jira] [Commented] (YARN-7491) Make sure AM is not scheduled on an opportunistic container
[ https://issues.apache.org/jira/browse/YARN-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269330#comment-16269330 ] Haibo Chen commented on YARN-7491: -- Thanks [~miklos.szeg...@cloudera.com] for the comments. I have updated the patch accordingly. > Make sure AM is not scheduled on an opportunistic container > --- > > Key: YARN-7491 > URL: https://issues.apache.org/jira/browse/YARN-7491 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-7491-YARN-1011.00.patch, > YARN-7491-YARN-1011.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7491) Make sure AM is not scheduled on an opportunistic container
[ https://issues.apache.org/jira/browse/YARN-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-7491: - Attachment: YARN-7491-YARN-1011.01.patch > Make sure AM is not scheduled on an opportunistic container > --- > > Key: YARN-7491 > URL: https://issues.apache.org/jira/browse/YARN-7491 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-7491-YARN-1011.00.patch, > YARN-7491-YARN-1011.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues
[ https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zian Chen updated YARN-6124: Attachment: YARN-6124.5.patch > Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin > -refreshQueues > - > > Key: YARN-6124 > URL: https://issues.apache.org/jira/browse/YARN-6124 > Project: Hadoop YARN > Issue Type: Task >Reporter: Wangda Tan >Assignee: Zian Chen > Attachments: YARN-6124.4.patch, YARN-6124.5.patch, > YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, YARN-6124.wip.3.patch > > > Now enabled / disable / update SchedulingEditPolicy config requires restart > RM. This is inconvenient when admin wants to make changes to > SchedulingEditPolicies. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
Miklos Szegedi created YARN-7577: Summary: Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart Key: YARN-7577 URL: https://issues.apache.org/jira/browse/YARN-7577 Project: Hadoop YARN Issue Type: Bug Reporter: Miklos Szegedi Assignee: Miklos Szegedi This happens if Fair Scheduler is the default. The test should run with both schedulers. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7576) Findbug warning for Resource exposing internal representation
[ https://issues.apache.org/jira/browse/YARN-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269436#comment-16269436 ] Jason Lowe commented on YARN-7576: -- This looks like it could be an artifact from YARN-7136. Pinging [~leftnoteasy] in case this is already a known issue and being handled elsewhere. > Findbug warning for Resource exposing internal representation > - > > Key: YARN-7576 > URL: https://issues.apache.org/jira/browse/YARN-7576 > Project: Hadoop YARN > Issue Type: Bug > Components: api >Affects Versions: 3.0.0 >Reporter: Jason Lowe > > Precommit builds are complaining about a findbugs warning: > {noformat} > EIorg.apache.hadoop.yarn.api.records.Resource.getResources() may expose > internal representation by returning Resource.resources > > Bug type EI_EXPOSE_REP (click for details) > In class org.apache.hadoop.yarn.api.records.Resource > In method org.apache.hadoop.yarn.api.records.Resource.getResources() > Field org.apache.hadoop.yarn.api.records.Resource.resources > At Resource.java:[line 213] > Returning a reference to a mutable object value stored in one of the object's > fields exposes the internal representation of the object. If instances are > accessed by untrusted code, and unchecked changes to the mutable object would > compromise security or other important properties, you will need to do > something different. Returning a new copy of the object is better approach in > many situations. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269455#comment-16269455 ] Eric Payne commented on YARN-7575: -- Sorry, my bad. My ULF is set to 2.0 on the default queue. After setting it to 3.0, my use case works. On the plus side, we know that ULF works as expected with absolute capacity :) +1 on the patch. Thanks [~sunilg] > When using absolute capacity configuration with no max capacity, scheduler UI > NPEs and can't grow queue > --- > > Key: YARN-7575 > URL: https://issues.apache.org/jira/browse/YARN-7575 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Eric Payne > Attachments: YARN-7575-YARN-5881.001.patch > > > I encountered the following while reviewing and testing branch YARN-5881. > The design document from YARN-5881 says that for max-capacity: > {quote} > 3) For each queue, we require: > a) if max-resource not set, it automatically set to parent.max-resource > {quote} > When I try leaving blank {{yarn.scheduler.capacity.< > queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It > looks like it's in {{CapacitySchedulerPage$ LeafQueueInfoBlock}}: > {noformat} > 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > ... > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) > {noformat} > Also... A job will run in the leaf queue with no max capacity set and it will > grow to the max capacity of the cluster, but if I add resources to the node, > the job won't grow any more even though it has pending resources. 
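The design rule quoted in the issue ("if max-resource not set, it automatically set to parent.max-resource") amounts to a null-safe fallback that the rendering path evidently lacks. A hypothetical sketch of that defaulting rule — names invented for illustration, not the YARN-5881 implementation:

```java
// Hypothetical sketch of design rule 3a from YARN-5881: a queue with no
// configured max-resource inherits its parent's. Names are illustrative.
public class QueueMaxResourceSketch {
    // null means "not configured"; the root queue's parentMax would be the
    // cluster total, so the chain always terminates in a concrete value.
    static Long effectiveMax(Long configuredMax, Long parentMax) {
        return configuredMax != null ? configuredMax : parentMax;
    }
}
```

Applying a fallback like this before rendering (and before computing the queue's growth headroom) would avoid both symptoms reported here: the NPE in the scheduler page and the queue failing to grow past a stale maximum.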
[jira] [Commented] (YARN-7491) Make sure AM is not scheduled on an opportunistic container
[ https://issues.apache.org/jira/browse/YARN-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269479#comment-16269479 ] genericqa commented on YARN-7491: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} YARN-1011 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 39s{color} | {color:green} YARN-1011 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s{color} | {color:green} YARN-1011 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} YARN-1011 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} YARN-1011 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 59s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in YARN-1011 has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} YARN-1011 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 50s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 40s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}110m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7491 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899667/YARN-7491-YARN-1011.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 11994d8484c2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | YARN-1011 / 10aad13 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v
[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart
[ https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chandni Singh updated YARN-7565: Attachment: YARN-7565.001.patch Patch 1 includes: - The Service Master no longer immediately releases a container from a previous attempt that is not reported in the RM registration response. - The master waits for a configured amount of time for the container to be recovered. The configuration is "yarn.service.container.expiry-interval-ms". - Once the container is recovered and reported to the master, it is started. - If the container is not recovered within the configured time, it is released. > Yarn service pre-maturely releases the container after AM restart > -- > > Key: YARN-7565 > URL: https://issues.apache.org/jira/browse/YARN-7565 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Chandni Singh >Assignee: Chandni Singh > Fix For: yarn-native-services > > Attachments: YARN-7565.001.patch > > > With YARN-6168, recovered containers can be reported to AM in response to the > AM heartbeat. > Currently, the Service Master will release the containers, that are not > reported in the AM registration response, immediately. > Instead, the master can wait for a configured amount of time for the > containers to be recovered by RM. These containers are sent to AM in the > heartbeat response. Once a container is not reported in the configured > interval, it can be released by the master. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
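The wait-then-release behavior the patch describes can be sketched as a small standalone model. All class and method names below are invented for illustration; only the property name "yarn.service.container.expiry-interval-ms" comes from the patch description:

```java
// Standalone sketch of YARN-7565's "wait for recovery, then release" idea.
// Names are illustrative; only the expiry property name is from the patch.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class PendingContainerTracker {
    private final long expiryIntervalMs; // yarn.service.container.expiry-interval-ms
    private final Map<String, Long> pending = new HashMap<>(); // id -> first-seen ms

    public PendingContainerTracker(long expiryIntervalMs) {
        this.expiryIntervalMs = expiryIntervalMs;
    }

    // Called when the RM registration response omits a container from a
    // previous attempt: track it instead of releasing it immediately.
    public void markPending(String containerId, long nowMs) {
        pending.putIfAbsent(containerId, nowMs);
    }

    // Called when a heartbeat reports the container as recovered; the caller
    // would then (re)start it. Returns true if we were still waiting on it.
    public boolean onRecovered(String containerId) {
        return pending.remove(containerId) != null;
    }

    // Periodic sweep: release containers not recovered within the interval.
    public int releaseExpired(long nowMs) {
        int released = 0;
        for (Iterator<Map.Entry<String, Long>> it =
                 pending.entrySet().iterator(); it.hasNext();) {
            if (nowMs - it.next().getValue() >= expiryIntervalMs) {
                it.remove(); // the caller would issue the actual release here
                released++;
            }
        }
        return released;
    }
}
```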
[jira] [Commented] (YARN-7491) Make sure AM is not scheduled on an opportunistic container
[ https://issues.apache.org/jira/browse/YARN-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269490#comment-16269490 ] Miklos Szegedi commented on YARN-7491: -- +1 Thank you for the contribution [~haibochen]. Committing this shortly. > Make sure AM is not scheduled on an opportunistic container > --- > > Key: YARN-7491 > URL: https://issues.apache.org/jira/browse/YARN-7491 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-7491-YARN-1011.00.patch, > YARN-7491-YARN-1011.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
[ https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-7577: - Attachment: YARN-7577.000.patch > Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart > -- > > Key: YARN-7577 > URL: https://issues.apache.org/jira/browse/YARN-7577 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi > Attachments: YARN-7577.000.patch > > > This happens, if Fair Scheduler is the default. The test should run with both > schedulers -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7491) Make sure AM is not scheduled on an opportunistic container
[ https://issues.apache.org/jira/browse/YARN-7491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269506#comment-16269506 ] Haibo Chen commented on YARN-7491: -- Thanks [~miklos.szeg...@cloudera.com] for the reviews! > Make sure AM is not scheduled on an opportunistic container > --- > > Key: YARN-7491 > URL: https://issues.apache.org/jira/browse/YARN-7491 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-7491-YARN-1011.00.patch, > YARN-7491-YARN-1011.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
[ https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-7577: - Description: This happens, if Fair Scheduler is the default. The test should run with both schedulers {code} java.lang.AssertionError: Expected :-102 Actual :-106 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} was:This happens, if Fair Scheduler is the default. The test should run with both schedulers > Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart > -- > > Key: YARN-7577 > URL: https://issues.apache.org/jira/browse/YARN-7577 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi > Attachments: YARN-7577.000.patch > > > This happens, if Fair Scheduler is the default. 
The test should run with both > schedulers > {code} > java.lang.AssertionError: > Expected :-102 > Actual :-106 > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7576) Findbug warning for Resource exposing internal representation
[ https://issues.apache.org/jira/browse/YARN-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269561#comment-16269561 ] Daniel Templeton commented on YARN-7576: My latest patch for YARN-7556 includes an exclusion for the issue. > Findbug warning for Resource exposing internal representation > - > > Key: YARN-7576 > URL: https://issues.apache.org/jira/browse/YARN-7576 > Project: Hadoop YARN > Issue Type: Bug > Components: api >Affects Versions: 3.0.0 >Reporter: Jason Lowe > > Precommit builds are complaining about a findbugs warning: > {noformat} > EIorg.apache.hadoop.yarn.api.records.Resource.getResources() may expose > internal representation by returning Resource.resources > > Bug type EI_EXPOSE_REP (click for details) > In class org.apache.hadoop.yarn.api.records.Resource > In method org.apache.hadoop.yarn.api.records.Resource.getResources() > Field org.apache.hadoop.yarn.api.records.Resource.resources > At Resource.java:[line 213] > Returning a reference to a mutable object value stored in one of the object's > fields exposes the internal representation of the object. If instances are > accessed by untrusted code, and unchecked changes to the mutable object would > compromise security or other important properties, you will need to do > something different. Returning a new copy of the object is better approach in > many situations. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6507) Add support in NodeManager to isolate FPGA devices with CGroups
[ https://issues.apache.org/jira/browse/YARN-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269576#comment-16269576 ] Wangda Tan commented on YARN-6507: -- Thanks [~tangzhankun], Patch looks good, I will commit the patch to trunk by end of this week if no objections. > Add support in NodeManager to isolate FPGA devices with CGroups > --- > > Key: YARN-6507 > URL: https://issues.apache.org/jira/browse/YARN-6507 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Zhankun Tang >Assignee: Zhankun Tang > Attachments: YARN-6507-branch-YARN-3926.001.patch, > YARN-6507-branch-YARN-3926.002.patch, YARN-6507-trunk.001.patch, > YARN-6507-trunk.002.patch, YARN-6507-trunk.003.patch, > YARN-6507-trunk.004.patch, YARN-6507-trunk.005.patch, > YARN-6507-trunk.006.patch, YARN-6507-trunk.007.patch, > YARN-6507-trunk.008.patch, YARN-6507-trunk.009.patch, > YARN-6507-trunk.010.patch > > > Support local FPGA resource scheduler to assign/isolate N FPGA slots to a > container. > At the beginning, support one vendor plugin with basic features to serve > OpenCL applications -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues
[ https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269579#comment-16269579 ] genericqa commented on YARN-6124: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 31s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 18s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 47s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 2 new + 13 unchanged - 0 fixed = 15 total (was 13) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 42s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 23 new + 714 unchanged - 0 fixed = 737 total (was 714) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 50s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}137m 33s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-6124 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899673/YARN-6124.5.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8391065b8aba 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 30941d9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-YARN-Build/18692/artifact/ou
[jira] [Assigned] (YARN-7576) Findbug warning for Resource exposing internal representation
[ https://issues.apache.org/jira/browse/YARN-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan reassigned YARN-7576: Assignee: Wangda Tan > Findbug warning for Resource exposing internal representation > - > > Key: YARN-7576 > URL: https://issues.apache.org/jira/browse/YARN-7576 > Project: Hadoop YARN > Issue Type: Bug > Components: api >Affects Versions: 3.0.0 >Reporter: Jason Lowe >Assignee: Wangda Tan > Attachments: YARN-7576.001.patch > > > Precommit builds are complaining about a findbugs warning: > {noformat} > EIorg.apache.hadoop.yarn.api.records.Resource.getResources() may expose > internal representation by returning Resource.resources > > Bug type EI_EXPOSE_REP (click for details) > In class org.apache.hadoop.yarn.api.records.Resource > In method org.apache.hadoop.yarn.api.records.Resource.getResources() > Field org.apache.hadoop.yarn.api.records.Resource.resources > At Resource.java:[line 213] > Returning a reference to a mutable object value stored in one of the object's > fields exposes the internal representation of the object. If instances are > accessed by untrusted code, and unchecked changes to the mutable object would > compromise security or other important properties, you will need to do > something different. Returning a new copy of the object is better approach in > many situations. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7576) Findbug warning for Resource exposing internal representation
[ https://issues.apache.org/jira/browse/YARN-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-7576: - Attachment: YARN-7576.001.patch Thanks [~jlowe] for reporting this issue. Attached ver.1 patch. > Findbug warning for Resource exposing internal representation > - > > Key: YARN-7576 > URL: https://issues.apache.org/jira/browse/YARN-7576 > Project: Hadoop YARN > Issue Type: Bug > Components: api >Affects Versions: 3.0.0 >Reporter: Jason Lowe > Attachments: YARN-7576.001.patch > > > Precommit builds are complaining about a findbugs warning: > {noformat} > EIorg.apache.hadoop.yarn.api.records.Resource.getResources() may expose > internal representation by returning Resource.resources > > Bug type EI_EXPOSE_REP (click for details) > In class org.apache.hadoop.yarn.api.records.Resource > In method org.apache.hadoop.yarn.api.records.Resource.getResources() > Field org.apache.hadoop.yarn.api.records.Resource.resources > At Resource.java:[line 213] > Returning a reference to a mutable object value stored in one of the object's > fields exposes the internal representation of the object. If instances are > accessed by untrusted code, and unchecked changes to the mutable object would > compromise security or other important properties, you will need to do > something different. Returning a new copy of the object is better approach in > many situations. > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan reassigned YARN-7575: Assignee: Sunil G > When using absolute capacity configuration with no max capacity, scheduler UI > NPEs and can't grow queue > --- > > Key: YARN-7575 > URL: https://issues.apache.org/jira/browse/YARN-7575 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Eric Payne >Assignee: Sunil G > Attachments: YARN-7575-YARN-5881.001.patch > > > I encountered the following while reviewing and testing branch YARN-5881. > The design document from YARN-5881 says that for max-capacity: > {quote} > 3) For each queue, we require: > a) if max-resource not set, it automatically set to parent.max-resource > {quote} > When I try leaving blank {{yarn.scheduler.capacity.< > queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It > looks like it's in {{CapacitySchedulerPage$ LeafQueueInfoBlock}}: > {noformat} > 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > ... > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) > {noformat} > Also... A job will run in the leaf queue with no max capacity set and it will > grow to the max capacity of the cluster, but if I add resources to the node, > the job won't grow any more even though it has pending resources. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
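The design-doc rule quoted above ("if max-resource not set, it automatically set to parent.max-resource") amounts to a null-safe walk up the queue hierarchy, which is the fallback the NPE-ing UI code is missing. A hypothetical sketch of that rule — class and field names are illustrative, not the actual CapacityScheduler code:

```java
// Hypothetical model of the "inherit parent's max-resource when unset"
// rule; a null configuredMax stands for a blank maximum-capacity setting.
public class QueueMaxCapacitySketch {
  public final QueueMaxCapacitySketch parent;  // null for the root queue
  public final Long configuredMax;             // null when not configured

  public QueueMaxCapacitySketch(QueueMaxCapacitySketch parent, Long configuredMax) {
    this.parent = parent;
    this.configuredMax = configuredMax;
  }

  // Walk up until a configured value is found, so callers (like the
  // scheduler UI) never see a null maximum capacity.
  public long effectiveMax(long clusterMax) {
    if (configuredMax != null) {
      return configuredMax;
    }
    return parent == null ? clusterMax : parent.effectiveMax(clusterMax);
  }
}
```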
[jira] [Commented] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269592#comment-16269592 ] Wangda Tan commented on YARN-7575: -- Thanks [~eepayne], +1 from my side as well. > When using absolute capacity configuration with no max capacity, scheduler UI > NPEs and can't grow queue > --- > > Key: YARN-7575 > URL: https://issues.apache.org/jira/browse/YARN-7575 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Eric Payne >Assignee: Sunil G > Attachments: YARN-7575-YARN-5881.001.patch > > > I encountered the following while reviewing and testing branch YARN-5881. > The design document from YARN-5881 says that for max-capacity: > {quote} > 3) For each queue, we require: > a) if max-resource not set, it automatically set to parent.max-resource > {quote} > When I try leaving blank {{yarn.scheduler.capacity.< > queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It > looks like it's in {{CapacitySchedulerPage$ LeafQueueInfoBlock}}: > {noformat} > 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > ... > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) > {noformat} > Also... A job will run in the leaf queue with no max capacity set and it will > grow to the max capacity of the cluster, but if I add resources to the node, > the job won't grow any more even though it has pending resources. 
[jira] [Commented] (YARN-7520) Queue Ordering policy changes for ordering auto created leaf queues within Managed parent Queues
[ https://issues.apache.org/jira/browse/YARN-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269596#comment-16269596 ] Wangda Tan commented on YARN-7520: -- Thanks [~suma.shivaprasad], in general patch looks good to me. Could you check findbugs warning? [~sunilg] could you help to take a look at the patch as well? > Queue Ordering policy changes for ordering auto created leaf queues within > Managed parent Queues > > > Key: YARN-7520 > URL: https://issues.apache.org/jira/browse/YARN-7520 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad > Attachments: YARN-7520.1.patch, YARN-7520.2.patch > > > Queue Ordering policy currently uses priority, utilization and absolute > capacity for pre-configured parent queues to order leaf queues while > assigning containers. It needs modifications for auto created leaf queues > since they can have zero capacity -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues
[ https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269599#comment-16269599 ] Wangda Tan commented on YARN-7473: -- Thanks [~suma.shivaprasad], in general patch looks good. Could you check findbug warning / unit test failure? I want to have another set of eyes to look at the patch as well. [~sunilg] could you help with review? > Implement Framework and policy for capacity management of auto created queues > -- > > Key: YARN-7473 > URL: https://issues.apache.org/jira/browse/YARN-7473 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Suma Shivaprasad >Assignee: Suma Shivaprasad > Attachments: YARN-7473.1.patch, YARN-7473.10.patch, > YARN-7473.2.patch, YARN-7473.3.patch, YARN-7473.4.patch, YARN-7473.5.patch, > YARN-7473.6.patch, YARN-7473.7.patch, YARN-7473.8.patch, YARN-7473.9.patch > > > This jira mainly addresses the following > > 1.Support adding pluggable policies on parent queue for dynamically managing > capacity/state for leaf queues. > 2. Implement a default policy that manages capacity based on pending > applications and either grants guaranteed or zero capacity to queues based on > parent's available guaranteed capacity. > 3. Integrate with SchedulingEditPolicy framework to trigger this periodically > and signal scheduler to take necessary actions for capacity/queue management. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5594) Handle old data format while recovering RM
[ https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269622#comment-16269622 ] Robert Kanter commented on YARN-5594: - I know this is an old JIRA that hasn't been updated in over a year, but we're running into this problem now and I was doing some investigation. This is caused by YARN-2743 - it incompatibly changes the format that the tokens are stored in the RMStateStore. We (Cloudera) had actually reverted YARN-2743 from CDH as a workaround. Anyway, as is, this breaks upgrades (rolling or not) from a version of Hadoop without YARN-2743 (i.e. Hadoop < 2.6.0) to a version with it (i.e. >= Hadoop 2.6.0), if you have delegation tokens in your RMStateStore. To fix this, I think [~Tatyana But] was on the right track by having it read the old format as a fallback. Though the patch needs updating to make it work with more than just the {{FileSystemRMStateStore}}. If nobody minds, I'll take over this JIRA. {quote}What happens when the format changes again?{quote} We should try to avoid incompatibly changing the format again in the future. If we need to for some reason, we should make sure there's some path to handle it. > Handle old data format while recovering RM > -- > > Key: YARN-5594 > URL: https://issues.apache.org/jira/browse/YARN-5594 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.0 >Reporter: Tatyana But > Labels: oct16-medium > Attachments: YARN-5594.001.patch > > > We've got that error after upgrade cluster from v.2.5.1 to 2.7.0. > {noformat} > 2016-08-25 17:20:33,293 ERROR > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Failed to > load/recover state > com.google.protobuf.InvalidProtocolBufferException: Protocol message contained > an invalid tag (zero). 
> at > com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89) > at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108) > at > org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4680) > at > org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto.(YarnServerResourceManagerRecoveryProtos.java:4644) > at > org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4740) > at > org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$1.parsePartialFrom(YarnServerResourceManagerRecoveryProtos.java:4735) > at > org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:5075) > at > org.apache.hadoop.yarn.proto.YarnServerResourceManagerRecoveryProtos$RMDelegationTokenIdentifierDataProto$Builder.mergeFrom(YarnServerResourceManagerRecoveryProtos.java:4955) > at > com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:337) > at > com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:267) > at > com.google.protobuf.AbstractMessageLite$Builder.mergeFrom(AbstractMessageLite.java:210) > at > com.google.protobuf.AbstractMessage$Builder.mergeFrom(AbstractMessage.java:904) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.records.RMDelegationTokenIdentifierData.readFields(RMDelegationTokenIdentifierData.java:43) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadRMDTSecretManagerState(FileSystemRMStateStore.java:355) > at > 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.loadState(FileSystemRMStateStore.java:199) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:587) > at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1007) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1048) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1044 > {noformat} > The reason for this problem is that we use different formats for the files > /var/mapr/cluster/yarn/rm/system/FSRMStateRoot/RMDTSecretManagerRoot/RMDelegationToken* > in these hadoop versions. > This fix handles the old data format during RM recovery if an > InvalidProtocolBufferException occurs.
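The fallback shape suggested in the comment above — parse the new format first and reparse the bytes in the old layout if that fails — can be sketched as follows. The marker byte and method names are hypothetical stand-ins for the real protobuf and legacy decoders, not the actual patch:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of "read the old format as a fallback": the new
// layout is assumed to start with a version marker; the old layout is
// the bare payload. Real code would use the protobuf and Writable paths.
public class TokenFormatFallback {

  public static final byte NEW_FORMAT_MARKER = 0x01;

  public static class InvalidFormatException extends RuntimeException {
    public InvalidFormatException(String msg) { super(msg); }
  }

  // New format: a one-byte version marker followed by the payload.
  public static String parseNewFormat(byte[] data) {
    if (data.length == 0 || data[0] != NEW_FORMAT_MARKER) {
      throw new InvalidFormatException("not the new format");
    }
    return new String(data, 1, data.length - 1, StandardCharsets.UTF_8);
  }

  // Old format: the bare payload with no marker.
  public static String parseOldFormat(byte[] data) {
    return new String(data, StandardCharsets.UTF_8);
  }

  // Try the new format first; fall back to the old one on failure,
  // mirroring the upgrade-safe recovery path discussed above.
  public static String parse(byte[] data) {
    try {
      return parseNewFormat(data);
    } catch (InvalidFormatException e) {
      return parseOldFormat(data);
    }
  }
}
```

As the comment notes, the real fix has to apply this fallback in every RMStateStore implementation, not just `FileSystemRMStateStore`.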
[jira] [Commented] (YARN-7558) "yarn logs" command fails to get logs for running containers if UI authentication is enabled.
[ https://issues.apache.org/jira/browse/YARN-7558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269672#comment-16269672 ] Junping Du commented on YARN-7558: -- Thanks [~nmaheshwari] for reporting the issue and [~xgong] for delivering a patch. The patch looks OK to me in general. [~xgong], is it possible to add a UT to cover this case? > "yarn logs" command fails to get logs for running containers if UI > authentication is enabled. > - > > Key: YARN-7558 > URL: https://issues.apache.org/jira/browse/YARN-7558 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Namit Maheshwari >Assignee: Xuan Gong >Priority: Critical > Attachments: YARN-7558.1.patch, YARN-7558.2.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues
[ https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zian Chen updated YARN-6124: Attachment: YARN-6124.6.patch > Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin > -refreshQueues > - > > Key: YARN-6124 > URL: https://issues.apache.org/jira/browse/YARN-6124 > Project: Hadoop YARN > Issue Type: Task >Reporter: Wangda Tan >Assignee: Zian Chen > Attachments: YARN-6124.4.patch, YARN-6124.5.patch, YARN-6124.6.patch, > YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, YARN-6124.wip.3.patch > > > Now enabled / disable / update SchedulingEditPolicy config requires restart > RM. This is inconvenient when admin wants to make changes to > SchedulingEditPolicies. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
[ https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269714#comment-16269714 ] genericqa commented on YARN-7577: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 7 new + 39 unchanged - 1 fixed = 46 total (was 40) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 53s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 54s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}103m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7577 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899688/YARN-7577.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a40ab8998993 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 30941d9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/18694/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/18694/artifact/out/patch-unit-hadoop-yarn-project
[jira] [Commented] (YARN-7565) Yarn service pre-maturely releases the container after AM restart
[ https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269715#comment-16269715 ] genericqa commented on YARN-7565: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 57s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 30s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 58s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 6 new + 134 unchanged - 1 fixed = 140 total (was 135) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 52s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 14s{color} | {color:green} hadoop-yarn-client in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 32s{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 14s{color} | {color:red} hadoop-yarn-services-core in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 33s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}114m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.service.TestYarnNativeServices | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7565 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899683/YARN-7565.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 09d9d959a8b9 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality
[jira] [Updated] (YARN-6669) Support security for YARN service framework
[ https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6669: -- Attachment: YARN-6669.08.patch > Support security for YARN service framework > --- > > Key: YARN-6669 > URL: https://issues.apache.org/jira/browse/YARN-6669 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-6669.01.patch, YARN-6669.02.patch, > YARN-6669.03.patch, YARN-6669.04.patch, YARN-6669.05.patch, > YARN-6669.06.patch, YARN-6669.07.patch, YARN-6669.08.patch, > YARN-6669.yarn-native-services.01.patch, > YARN-6669.yarn-native-services.03.patch, > YARN-6669.yarn-native-services.04.patch, > YARN-6669.yarn-native-services.05.patch > > > Changes include: > - Make the registry client programmatically generate the jaas conf for secure > access to the ZK quorum > - Create a KerberosPrincipal resource object in the REST API for the user to supply a > kerberos keytab and principal > - The user has two ways to configure this: > -- If the keytab starts with "hdfs://", the keytab will be localized by YARN > -- If the keytab starts with "file://", the keytab is assumed to be > available on the localhost. > - The AM will use the keytab to log in > - ServiceClient is changed to ask for an hdfs delegation token when submitting the > service > - The AM code will use the tokens when launching containers > - Support kerberized communication between the client and the AM
[jira] [Commented] (YARN-6669) Support security for YARN service framework
[ https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269739#comment-16269739 ] Jian He commented on YARN-6669: --- Fixed an issue in the KerberosPrincipal object > Support security for YARN service framework > --- > > Key: YARN-6669 > URL: https://issues.apache.org/jira/browse/YARN-6669 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-6669.01.patch, YARN-6669.02.patch, > YARN-6669.03.patch, YARN-6669.04.patch, YARN-6669.05.patch, > YARN-6669.06.patch, YARN-6669.07.patch, YARN-6669.08.patch, > YARN-6669.yarn-native-services.01.patch, > YARN-6669.yarn-native-services.03.patch, > YARN-6669.yarn-native-services.04.patch, > YARN-6669.yarn-native-services.05.patch > > > Changes include: > - Make the registry client programmatically generate the jaas conf for secure > access to the ZK quorum > - Create a KerberosPrincipal resource object in the REST API for the user to supply a > kerberos keytab and principal > - The user has two ways to configure this: > -- If the keytab starts with "hdfs://", the keytab will be localized by YARN > -- If the keytab starts with "file://", the keytab is assumed to be > available on the localhost. > - The AM will use the keytab to log in > - ServiceClient is changed to ask for an hdfs delegation token when submitting the > service > - The AM code will use the tokens when launching containers > - Support kerberized communication between the client and the AM
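The two keytab conventions listed in the description — "hdfs://" meaning YARN localizes the keytab, "file://" meaning it is already on the host — reduce to a dispatch on the URI scheme. A hypothetical sketch of that classification; the real logic lives in the attached ServiceClient/AM patches:

```java
// Hypothetical classifier for the two keytab URI conventions described
// above; enum and method names are illustrative only.
public class KeytabSourceSketch {
  public enum Source { LOCALIZED_BY_YARN, LOCAL_FILE, UNSUPPORTED }

  public static Source classify(String keytabUri) {
    if (keytabUri.startsWith("hdfs://")) {
      return Source.LOCALIZED_BY_YARN;  // YARN localizes the keytab
    }
    if (keytabUri.startsWith("file://")) {
      return Source.LOCAL_FILE;         // assumed present on the localhost
    }
    return Source.UNSUPPORTED;
  }
}
```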
[jira] [Updated] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled
[ https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-7381: - Target Version/s: 3.0.0 > Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled > --- > > Key: YARN-7381 > URL: https://issues.apache.org/jira/browse/YARN-7381 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0, 3.1.0 >Reporter: Xuan Gong >Assignee: Xuan Gong >Priority: Critical > Attachments: YARN-7381.1.patch > > > Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", > so we can aggregate launch_container.sh and directory.info -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled
[ https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated YARN-7381: - Priority: Critical (was: Major) > Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled > --- > > Key: YARN-7381 > URL: https://issues.apache.org/jira/browse/YARN-7381 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0, 3.1.0 >Reporter: Xuan Gong >Assignee: Xuan Gong >Priority: Critical > Attachments: YARN-7381.1.patch > > > Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", > so we can aggregate launch_container.sh and directory.info -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled
[ https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269777#comment-16269777 ] Wangda Tan commented on YARN-7381: -- Since this is important for debuggability, I marked this as critical and set the target version to 3.0.0. If we don't do this in 3.0.0, it will be considered an incompatible change later in 3.1, etc. [~andrew.wang], this is a trivial change; let me know if you have any objections. > Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled > --- > > Key: YARN-7381 > URL: https://issues.apache.org/jira/browse/YARN-7381 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0, 3.1.0 >Reporter: Xuan Gong >Assignee: Xuan Gong >Priority: Critical > Attachments: YARN-7381.1.patch > > > Enable the configuration "yarn.nodemanager.log-container-debug-info.enabled", > so we can aggregate launch_container.sh and directory.info
[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart
[ https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chandni Singh updated YARN-7565: Attachment: YARN-7565.001.patch > Yarn service pre-maturely releases the container after AM restart > -- > > Key: YARN-7565 > URL: https://issues.apache.org/jira/browse/YARN-7565 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Chandni Singh >Assignee: Chandni Singh > Fix For: yarn-native-services > > Attachments: YARN-7565.001.patch > > > With YARN-6168, recovered containers can be reported to AM in response to the > AM heartbeat. > Currently, the Service Master will release the containers, that are not > reported in the AM registration response, immediately. > Instead, the master can wait for a configured amount of time for the > containers to be recovered by RM. These containers are sent to AM in the > heartbeat response. Once a container is not reported in the configured > interval, it can be released by the master. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-7565) Yarn service pre-maturely releases the container after AM restart
[ https://issues.apache.org/jira/browse/YARN-7565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chandni Singh updated YARN-7565: Attachment: (was: YARN-7565.001.patch) > Yarn service pre-maturely releases the container after AM restart > -- > > Key: YARN-7565 > URL: https://issues.apache.org/jira/browse/YARN-7565 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Chandni Singh >Assignee: Chandni Singh > Fix For: yarn-native-services > > Attachments: YARN-7565.001.patch > > > With YARN-6168, recovered containers can be reported to the AM in response to the > AM heartbeat. > Currently, the Service Master immediately releases any container that is not > reported in the AM registration response. > Instead, the master can wait a configured amount of time for the > containers to be recovered by the RM; recovered containers are sent to the AM in the > heartbeat response. If a container is still not reported within the configured > interval, it can then be released by the master.
[jira] [Updated] (YARN-7541) Node updates don't update the maximum cluster capability for resources other than CPU and memory
[ https://issues.apache.org/jira/browse/YARN-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-7541: --- Attachment: YARN-7541.006.patch Patch to fix formatting. The test errors are YARN-7548 and YARN-7507; the Findbugs issue is YARN-7576. > Node updates don't update the maximum cluster capability for resources other > than CPU and memory > > > Key: YARN-7541 > URL: https://issues.apache.org/jira/browse/YARN-7541 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Affects Versions: 3.0.0-beta1, 3.1.0 >Reporter: Daniel Templeton >Assignee: Daniel Templeton >Priority: Critical > Attachments: YARN-7541.001.patch, YARN-7541.002.patch, > YARN-7541.003.patch, YARN-7541.004.patch, YARN-7541.005.patch, > YARN-7541.006.patch > > > When I submit an MR job that asks for too much memory or CPU for the map or > reduce, the AM fails because it recognizes that the request is too large. > For any other resource type, however, the request is instead made and > remains pending forever. It looks like we forgot to update the code > that tracks the maximum container allocation in {{ClusterNodeTracker}}.
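The bug is easiest to see with a toy model of the node tracker. This is a hypothetical sketch, not the real {{ClusterNodeTracker}}: the point is only that the maximum container allocation must be recomputed component-wise over every resource type a node registers, not just memory and vcores.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy stand-in for the tracker: resources are modeled as name -> amount maps
// (the real code uses the Resource API). The maximum allocation is, per
// resource type, the largest amount any single node offers.
class MaxAllocationTracker {
  private final List<Map<String, Long>> nodes = new ArrayList<>();

  void addNode(Map<String, Long> nodeResources) {
    nodes.add(nodeResources);
  }

  // Recompute the maximum over ALL resource types, including custom ones
  // like "gpu" -- the missing piece this issue describes.
  Map<String, Long> maxAllocation() {
    Map<String, Long> max = new HashMap<>();
    for (Map<String, Long> node : nodes) {
      for (Map.Entry<String, Long> r : node.entrySet()) {
        max.merge(r.getKey(), r.getValue(), Long::max);
      }
    }
    return max;
  }
}
```

If the recomputation only touched memory and vcores, a request for more "gpu" than any node has would never be rejected up front and would sit pending forever, which is the symptom in the description.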
[jira] [Commented] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269788#comment-16269788 ] genericqa commented on YARN-7575: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} YARN-5881 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 24s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} YARN-5881 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} YARN-5881 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 11s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}104m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7575 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899652/YARN-7575-YARN-5881.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 19981ffa480a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | YARN-5881 / f7b1257 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/18696/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/18696/testReport/ | | Max. process+thread count | 876 (vs. ulimit of 5000) |
[jira] [Updated] (YARN-7577) Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart
[ https://issues.apache.org/jira/browse/YARN-7577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Szegedi updated YARN-7577: - Attachment: YARN-7577.001.patch > Unit Fail: TestAMRestart#testPreemptedAMRestartOnRMRestart > -- > > Key: YARN-7577 > URL: https://issues.apache.org/jira/browse/YARN-7577 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi > Attachments: YARN-7577.000.patch, YARN-7577.001.patch > > > This happens if Fair Scheduler is the default. The test should run with both > schedulers. > {code} > java.lang.AssertionError: > Expected :-102 > Actual :-106 > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testPreemptedAMRestartOnRMRestart(TestAMRestart.java:583) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code}
[jira] [Commented] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled
[ https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269837#comment-16269837 ] genericqa commented on YARN-7381: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 51s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 11s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 1s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7381 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12895078/YARN-7381.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux c923f385c8bb 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /te
[jira] [Commented] (YARN-6669) Support security for YARN service framework
[ https://issues.apache.org/jira/browse/YARN-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269860#comment-16269860 ] genericqa commented on YARN-6669: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 2s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 35s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 21s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 59s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 23 new + 298 unchanged - 47 fixed = 321 total (was 345) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 38s{color} | {color:green} hadoop-yarn-services in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 38s{color} | {color:green} hadoop-yarn-services-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-services-api in the patch passed. {color} | | {color:gre
[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues
[ https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269873#comment-16269873 ] genericqa commented on YARN-6124: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 29s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 6 new + 713 unchanged - 2 fixed = 719 total (was 715) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 41s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 44s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-6124 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899704/YARN-6124.6.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 03db0abc51f9 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 30941d9 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/18698/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https
[jira] [Commented] (YARN-7381) Enable the configuration: yarn.nodemanager.log-container-debug-info.enabled
[ https://issues.apache.org/jira/browse/YARN-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269880#comment-16269880 ] genericqa commented on YARN-7381: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 6s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 7s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 50s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 8s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 74m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-7381 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12895078/YARN-7381.1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 77c26d626817 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /test
[jira] [Commented] (YARN-7562) queuePlacementPolicy should not match parent queue
[ https://issues.apache.org/jira/browse/YARN-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269883#comment-16269883 ] Wilfred Spiegelenburg commented on YARN-7562: - Your changes will break existing configurations. You are even breaking existing use cases that are tested. The changes you are making to the tests are not acceptable: changing a parent to a leaf queue fundamentally changes the test. Please run the JUnit tests relevant to the code you are changing before you submit the patch. What you are after can already be achieved through config. You need to change the order in the placement rules and use ACLs if needed. This is what the placement rules should look like: {code} {code} Put ACLs on specific queues if you only want specific users or groups to use them. > queuePlacementPolicy should not match parent queue > -- > > Key: YARN-7562 > URL: https://issues.apache.org/jira/browse/YARN-7562 > Project: Hadoop YARN > Issue Type: Improvement > Components: fairscheduler, resourcemanager >Affects Versions: 2.7.1 >Reporter: chuanjie.duan > Attachments: YARN-7562.002.patch, YARN-7562.patch > > > User algo submits a MapReduce job, and the console log says "root.algo is not a leaf > queue exception". > root.algo is a parent queue, so matching it is meaningless to me. I am not sure why the parent queue > is matched first > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 3000 mb, 1 vcores > 24000 mb, 8 vcores > 4 > 1 > fifo > > > 300 > 4 mb, 10 vcores > 20 mb, 60 vcores > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > 300 > 4 mb, 10 vcores > 10 mb, 30 vcores > 20 > fifo > 4 > > > > > > >
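For illustration, placement rules of the kind Wilfred describes could look like the following. This is a generic sketch using the standard Fair Scheduler rule names, not the exact rules from his (empty) {code} block; the queue name root.default is an assumption:

```xml
<queuePlacementPolicy>
  <!-- Honor an explicitly requested queue, but never create new ones. -->
  <rule name="specified" create="false"/>
  <!-- Otherwise try a per-user leaf queue. -->
  <rule name="user" create="false"/>
  <!-- Fall back to a known leaf queue for everything else. -->
  <rule name="default" queue="root.default"/>
</queuePlacementPolicy>
```

Combined with ACLs on the specific queues, rule ordering like this controls where applications land without any need to match a parent queue.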
[jira] [Commented] (YARN-7576) Findbug warning for Resource exposing internal representation
[ https://issues.apache.org/jira/browse/YARN-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269908#comment-16269908 ] genericqa commented on YARN-7576: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 28s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in trunk has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 2 unchanged - 3 fixed = 2 total (was 5) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 22s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}196m 41s{color} | {color:black} {color}
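For context on the Findbugs warning this patch resolves ("Resource exposing internal representation", typically reported as EI_EXPOSE_REP): the patch itself is not reproduced in this thread, but the general pattern is a getter that returns a mutable internal field. The class and method names below are a generic sketch, not the actual Resource.java change:

```java
/**
 * Generic illustration of the EI_EXPOSE_REP pattern (not the actual
 * hadoop-yarn-api Resource code). Findbugs flags getters that hand out
 * a reference to a mutable internal array, because callers can then
 * mutate the object's state from outside.
 */
public class ResourceLike {
  private final long[] values = {1024, 2};

  // Findbugs would flag this: callers receive the internal array itself.
  public long[] getValuesUnsafe() {
    return values;
  }

  // Typical fix: return a defensive copy so internal state stays private.
  public long[] getValues() {
    return values.clone();
  }
}
```

Mutating the array returned by {{getValues()}} leaves the object untouched, whereas mutating the one from {{getValuesUnsafe()}} silently changes it — which is exactly what the warning guards against.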
[jira] [Commented] (YARN-5594) Handle old data format while recovering RM
[ https://issues.apache.org/jira/browse/YARN-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269926#comment-16269926 ] genericqa commented on YARN-5594: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 43s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 39s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 51s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}175m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | YARN-5594 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826364/YARN-5594.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux de120f10a7c6 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven |
[jira] [Commented] (YARN-6507) Add support in NodeManager to isolate FPGA devices with CGroups
[ https://issues.apache.org/jira/browse/YARN-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269937#comment-16269937 ] Zhankun Tang commented on YARN-6507: [~wangda], thanks for the review! > Add support in NodeManager to isolate FPGA devices with CGroups > --- > > Key: YARN-6507 > URL: https://issues.apache.org/jira/browse/YARN-6507 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Zhankun Tang >Assignee: Zhankun Tang > Attachments: YARN-6507-branch-YARN-3926.001.patch, > YARN-6507-branch-YARN-3926.002.patch, YARN-6507-trunk.001.patch, > YARN-6507-trunk.002.patch, YARN-6507-trunk.003.patch, > YARN-6507-trunk.004.patch, YARN-6507-trunk.005.patch, > YARN-6507-trunk.006.patch, YARN-6507-trunk.007.patch, > YARN-6507-trunk.008.patch, YARN-6507-trunk.009.patch, > YARN-6507-trunk.010.patch > > > Support local FPGA resource scheduler to assign/isolate N FPGA slots to a > container. > At the beginning, support one vendor plugin with basic features to serve > OpenCL applications
[jira] [Updated] (YARN-6507) Add support in NodeManager to isolate FPGA devices with CGroups
[ https://issues.apache.org/jira/browse/YARN-6507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhankun Tang updated YARN-6507: --- Attachment: YARN-6507-trunk.011.patch Strange that YARN-6507-trunk.010.patch received no QA result comment. Renamed it to "YARN-6507-trunk.011.patch" and submitted it again. > Add support in NodeManager to isolate FPGA devices with CGroups > --- > > Key: YARN-6507 > URL: https://issues.apache.org/jira/browse/YARN-6507 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn >Reporter: Zhankun Tang >Assignee: Zhankun Tang > Attachments: YARN-6507-branch-YARN-3926.001.patch, > YARN-6507-branch-YARN-3926.002.patch, YARN-6507-trunk.001.patch, > YARN-6507-trunk.002.patch, YARN-6507-trunk.003.patch, > YARN-6507-trunk.004.patch, YARN-6507-trunk.005.patch, > YARN-6507-trunk.006.patch, YARN-6507-trunk.007.patch, > YARN-6507-trunk.008.patch, YARN-6507-trunk.009.patch, > YARN-6507-trunk.010.patch, YARN-6507-trunk.011.patch > > > Support local FPGA resource scheduler to assign/isolate N FPGA slots to a > container. > At the beginning, support one vendor plugin with basic features to serve > OpenCL applications
[jira] [Created] (YARN-7578) Extend TestDiskFailures.waitForDiskHealthCheck() sleeping time.
Guangming Zhang created YARN-7578: - Summary: Extend TestDiskFailures.waitForDiskHealthCheck() sleeping time. Key: YARN-7578 URL: https://issues.apache.org/jira/browse/YARN-7578 Project: Hadoop YARN Issue Type: Test Affects Versions: 3.1.0 Environment: ARMv8 AArch64, Ubuntu16.04 Reporter: Guangming Zhang Priority: Minor Fix For: 3.1.0 The Thread.sleep() function is called to wait for the NodeManager to identify disk failures. But in some cases, for example on lower-end hardware, the sleep time is too short, so the NodeManager may not have finished identifying disk failures. This causes test errors: {code:java} Running org.apache.hadoop.yarn.server.TestDiskFailures Tests run: 3, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 17.686 sec <<< FAILURE! - in org.apache.hadoop.yarn.server.TestDiskFailures testLocalDirsFailures(org.apache.hadoop.yarn.server.TestDiskFailures) Time elapsed: 10.412 sec <<< FAILURE! java.lang.AssertionError: NodeManager could not identify disk failure. at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239) at org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:186) at org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99) testLogDirsFailures(org.apache.hadoop.yarn.server.TestDiskFailures) Time elapsed: 5.99 sec <<< FAILURE! java.lang.AssertionError: NodeManager could not identify disk failure. 
at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239) at org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:186) at org.apache.hadoop.yarn.server.TestDiskFailures.testLogDirsFailures(TestDiskFailures.java:111) {code} So extend the sleep time from 1000ms to 1500ms to avoid these unit test errors.
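Extending a fixed sleep trades flakiness for runtime and may still fail on even slower machines. A common alternative — purely a sketch here, with hypothetical names, not the actual TestDiskFailures code — is to poll the condition in a bounded loop and return as soon as it holds:

```java
import java.util.function.BooleanSupplier;

/**
 * Illustrative helper: poll a condition until it becomes true or a
 * deadline passes, instead of a single fixed-length Thread.sleep().
 * Fast machines return early; slow machines get the full timeout.
 */
public class DiskHealthWait {
  public static boolean waitFor(BooleanSupplier condition,
                                long timeoutMs, long pollIntervalMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() >= deadline) {
        return false;  // condition never held within the timeout
      }
      try {
        Thread.sleep(pollIntervalMs);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();  // preserve interrupt status
        return false;
      }
    }
    return true;
  }
}
```

A test would then assert on the boolean result (e.g. that the NodeManager reported the disk failure within, say, 5 seconds) rather than sleeping a fixed amount and hoping the state has settled.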
[jira] [Commented] (YARN-7575) When using absolute capacity configuration with no max capacity, scheduler UI NPEs and can't grow queue
[ https://issues.apache.org/jira/browse/YARN-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269952#comment-16269952 ] Sunil G commented on YARN-7575: --- Thanks [~eepayne] and [~leftnoteasy] for verifying. Committing later today. > When using absolute capacity configuration with no max capacity, scheduler UI > NPEs and can't grow queue > --- > > Key: YARN-7575 > URL: https://issues.apache.org/jira/browse/YARN-7575 > Project: Hadoop YARN > Issue Type: Sub-task > Components: capacity scheduler >Reporter: Eric Payne >Assignee: Sunil G > Attachments: YARN-7575-YARN-5881.001.patch > > > I encountered the following while reviewing and testing branch YARN-5881. > The design document from YARN-5881 says that for max-capacity: > {quote} > 3) For each queue, we require: > a) if max-resource not set, it automatically set to parent.max-resource > {quote} > When I try leaving blank {{yarn.scheduler.capacity.<queue-path>.maximum-capacity}}, the RMUI scheduler page refuses to render. It > looks like it's in {{CapacitySchedulerPage$LeafQueueInfoBlock}}: > {noformat} > 2017-11-28 11:29:16,974 [qtp43473566-220] ERROR webapp.Dispatcher: error > handling URI: /cluster/scheduler > java.lang.reflect.InvocationTargetException > ... > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164) > at > org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithoutParition(CapacitySchedulerPage.java:129) > {noformat} > Also... A job will run in the leaf queue with no max capacity set and it will > grow to the max capacity of the cluster, but if I add resources to the node, > the job won't grow any more even though it has pending resources. 
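For reference, the YARN-5881 absolute-resource syntax expresses queue capacities as explicit resource amounts rather than percentages. A hedged sketch (queue name and values are illustrative, not taken from the report) of what an explicitly configured maximum looks like:

```
<!-- Illustrative only: queue name and values are made up. -->
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>[memory=8192,vcores=8]</value>
</property>
<property>
  <!-- The reported bug is the case where this property is left unset:
       per the design document it should fall back to the parent's
       maximum, but instead the scheduler UI NPEs and the queue
       stops growing when cluster resources increase. -->
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>[memory=16384,vcores=16]</value>
</property>
```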