[jira] [Updated] (YARN-10187) converting README to README.md
[ https://issues.apache.org/jira/browse/YARN-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated YARN-10187: - Component/s: documentation > converting README to README.md > -- > > Key: YARN-10187 > URL: https://issues.apache.org/jira/browse/YARN-10187 > Project: Hadoop YARN > Issue Type: Improvement > Components: documentation >Reporter: N Sanketh Reddy >Priority: Major > Original Estimate: 1h > Remaining Estimate: 1h > > Converting the README to README.md to take advantage of Markdown rendering and for better > readability -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10187) converting README to README.md
[ https://issues.apache.org/jira/browse/YARN-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056591#comment-17056591 ] Akira Ajisaka commented on YARN-10187: -- Thanks [~Skete] for your report. I think hadoop-yarn-project/hadoop-yarn/README is not maintained and can be removed.
[jira] [Commented] (YARN-9879) Allow multiple leaf queues with the same name in CS
[ https://issues.apache.org/jira/browse/YARN-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056590#comment-17056590 ] Sunil G commented on YARN-9879: --- Thanks [~shuzirra]. Appreciate the same. > Allow multiple leaf queues with the same name in CS > --- > > Key: YARN-9879 > URL: https://issues.apache.org/jira/browse/YARN-9879 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Gergely Pollak >Assignee: Gergely Pollak >Priority: Major > Labels: fs2cs > Attachments: CSQueue.getQueueUsage.txt, DesignDoc_v1.pdf, > YARN-9879.POC001.patch, YARN-9879.POC002.patch, YARN-9879.POC003.patch, > YARN-9879.POC004.patch, YARN-9879.POC005.patch, YARN-9879.POC006.patch, > YARN-9879.POC007.patch, YARN-9879.POC008.patch, YARN-9879.POC009.patch, > YARN-9879.POC010.patch, YARN-9879.POC011.patch, YARN-9879.POC012.patch > > > Currently the leaf queue's name must be unique regardless of its position in > the queue hierarchy. > The design doc and first proposal are being made; I'll attach them as soon as they're done.
[jira] [Commented] (YARN-9879) Allow multiple leaf queues with the same name in CS
[ https://issues.apache.org/jira/browse/YARN-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056589#comment-17056589 ] Gergely Pollak commented on YARN-9879: -- [~wangda], [~prabhujoseph] and [~pbacsko], thank you for your feedback. The latest patch contains many fixes from Szilard's review and some substantial changes to CSQueueStore to fix the queue-overwrite problem, plus new tests for that class. I removed the condition that prevented creating multiple leaf queues with the same name, so I expect some regressions from the changes, but nothing serious. I'll get to implementing the changes suggested in the rest of the reviews and checking the issues reported by Prabhu. Hopefully the next iteration of the patch will be very close to committable material.
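The queue-overwrite problem mentioned above comes from keying queues by their short name, so two leaf queues with the same name clobber each other. A minimal sketch of the path-keyed store idea (a hypothetical, simplified illustration, not the actual CSQueueStore code from the patch):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Simplified sketch of a queue store keyed by full path
 * (e.g. "root.users.alice") so two leaf queues may share a short name.
 * Short-name lookup succeeds only when it is unambiguous.
 */
public class QueueStoreSketch {
    private final Map<String, String> byFullPath = new HashMap<>();
    private final Map<String, Set<String>> pathsByShortName = new HashMap<>();

    /** Registers a queue under its full path; never overwrites a same-named queue. */
    public void add(String fullPath, String queue) {
        byFullPath.put(fullPath, queue);
        String shortName = fullPath.substring(fullPath.lastIndexOf('.') + 1);
        pathsByShortName.computeIfAbsent(shortName, k -> new HashSet<>()).add(fullPath);
    }

    /** Resolves a full path directly, or a short name if exactly one queue matches. */
    public String get(String name) {
        if (byFullPath.containsKey(name)) {
            return byFullPath.get(name);
        }
        Set<String> candidates = pathsByShortName.get(name);
        if (candidates != null && candidates.size() == 1) {
            return byFullPath.get(candidates.iterator().next());
        }
        return null; // unknown or ambiguous short name
    }
}
```

With this layout, a short-name lookup that has become ambiguous returns nothing instead of silently resolving to whichever queue was registered last.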
[jira] [Updated] (YARN-9879) Allow multiple leaf queues with the same name in CS
[ https://issues.apache.org/jira/browse/YARN-9879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gergely Pollak updated YARN-9879: - Attachment: YARN-9879.POC012.patch
[jira] [Commented] (YARN-10003) YarnConfigurationStore#checkVersion throws exception that belongs to RMStateStore
[ https://issues.apache.org/jira/browse/YARN-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056495#comment-17056495 ] Hadoop QA commented on YARN-10003: -- -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 7m 6s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || branch-3.2 Compile Tests ||
| +1 | mvninstall | 22m 56s | branch-3.2 passed |
| +1 | compile | 0m 39s | branch-3.2 passed |
| +1 | checkstyle | 0m 34s | branch-3.2 passed |
| +1 | mvnsite | 0m 46s | branch-3.2 passed |
| +1 | shadedclient | 13m 6s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 13s | branch-3.2 passed |
| +1 | javadoc | 0m 35s | branch-3.2 passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 44s | the patch passed |
| +1 | compile | 0m 34s | the patch passed |
| +1 | javac | 0m 34s | the patch passed |
| +1 | checkstyle | 0m 26s | the patch passed |
| +1 | mvnsite | 0m 37s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 37s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 13s | the patch passed |
| +1 | javadoc | 0m 27s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 451m 1s | hadoop-yarn-server-resourcemanager in the patch failed. |
| +1 | asflicense | 0m 29s | The patch does not generate ASF License warnings. |
| | | 515m 6s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.nodelabels.TestRMNodeLabelsManager |
| | hadoop.yarn.server.resourcemanager.scheduler.TestSchedulingWithAllocationRequestId |
| | hadoop.yarn.server.resourcemanager.TestRM |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueParsing |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestChildQueueOrder |
| | hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation |
| | hadoop.yarn.server.resourcemanager.placement.TestPlacementManager |
| | hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl |
| | hadoop.yarn.server.resourcemanager.TestClientRMService |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation |
| | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestSchedulingRequestContainerAllocation |
| | hadoop.yarn.server.resourcemanager.TestApplicationACLs |
[jira] [Commented] (YARN-10154) CS Dynamic Queues cannot be configured with absolute resources
[ https://issues.apache.org/jira/browse/YARN-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056474#comment-17056474 ] Clay B. commented on YARN-10154: Thank you for the work on this [~maniraj...@gmail.com]! I did not notice any documentation updates in the patch; I think the changes at https://github.com/cbaenziger/hadoop/commit/3d1f4485d50d03a037268dec2634b911d7a9ae28 might be helpful, if of use. > CS Dynamic Queues cannot be configured with absolute resources > -- > > Key: YARN-10154 > URL: https://issues.apache.org/jira/browse/YARN-10154 > Project: Hadoop YARN > Issue Type: Bug >Affects Versions: 3.1.3 >Reporter: Sunil G >Assignee: Manikandan R >Priority: Major > Attachments: YARN-10154.001.patch > > > In CS, a ManagedParent Queue and its template cannot take an absolute resource > value like > [memory=8192,vcores=8] > This Jira is to track and improve the configuration reading module of > DynamicQueue to support absolute resource values.
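For orientation, the absolute-resource syntax the issue wants the managed parent queue's leaf-queue template to accept looks roughly like the fragment below (shown for a hypothetical `root.parent` queue in `capacity-scheduler.xml`; the exact template property name is illustrative):

```xml
<property>
  <name>yarn.scheduler.capacity.root.parent.leaf-queue-template.capacity</name>
  <value>[memory=8192,vcores=8]</value>
</property>
```

Ordinary (non-dynamic) Capacity Scheduler queues already accept the `[memory=...,vcores=...]` form for their capacity; the bug is that the dynamic-queue template's configuration reader does not.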
[jira] [Commented] (YARN-6214) NullPointer Exception while querying timeline server API
[ https://issues.apache.org/jira/browse/YARN-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056315#comment-17056315 ] Benjamin Kim commented on YARN-6214: The root cause is that when one of the apps is still in INIT state, some of its properties, such as the application type, are null. So if you make the API call with the `state=FINISHED` HTTP parameter, you won't hit this issue. However, we probably need better error-handling logic. > NullPointer Exception while querying timeline server API > > > Key: YARN-6214 > URL: https://issues.apache.org/jira/browse/YARN-6214 > Project: Hadoop YARN > Issue Type: Bug > Components: timelineserver >Affects Versions: 2.7.1 >Reporter: Ravi Teja Chilukuri >Priority: Major > > The apps API works fine and gives all applications, including MapReduce and Tez: > http://:8188/ws/v1/applicationhistory/apps > But when queried with application types via these APIs, it fails with > NullPointerException: > http://:8188/ws/v1/applicationhistory/apps?applicationTypes=TEZ > http://:8188/ws/v1/applicationhistory/apps?applicationTypes=MAPREDUCE > java.lang.NullPointerException > Blocked on this issue as we are not able to run analytics on the Tez job > counters on the prod jobs.
> Timeline Logs: > |2017-02-22 11:47:57,183 WARN webapp.GenericExceptionHandler > (GenericExceptionHandler.java:toResponse(98)) - INTERNAL_SERVER_ERROR > java.lang.NullPointerException > at > org.apache.hadoop.yarn.server.webapp.WebServices.getApps(WebServices.java:195) > at > org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices.getApps(AHSWebServices.java:96) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:483) > at > com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) > at > com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185) > at > com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) > at > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) > Complete stacktrace: > http://pastebin.com/bRgxVabf
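The failure pattern in this issue (filtering on a field that is null for apps still initializing) is a classic null-safety bug. A hypothetical, simplified sketch of the fix idea, not the actual WebServices#getApps code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Simplified stand-in for an application report whose type may be unset. */
class AppReport {
    final String id;
    final String type; // may be null while the app is still in INIT state
    AppReport(String id, String type) { this.id = id; this.type = type; }
}

public class AppFilterSketch {
    /**
     * Returns apps whose type matches one of the requested types,
     * skipping (rather than throwing an NPE on) apps with a null type.
     */
    public static List<AppReport> filterByType(List<AppReport> apps,
                                               Set<String> wantedTypes) {
        List<AppReport> result = new ArrayList<>();
        for (AppReport app : apps) {
            if (app.type != null && wantedTypes.contains(app.type)) {
                result.add(app);
            }
        }
        return result;
    }
}
```

The null-check-first ordering is the whole fix: an app with no type simply never matches a type filter, instead of crashing the request.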
[jira] [Commented] (YARN-9354) Resources should be created with ResourceTypesTestHelper instead of TestUtils
[ https://issues.apache.org/jira/browse/YARN-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056091#comment-17056091 ] Hudson commented on YARN-9354: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18041 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18041/]) YARN-9354. Resources should be created with ResourceTypesTestHelper (snemeth: rev cf9cf83a43be12a4325b02c2953c365309352649) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/resourcetypes/ResourceTypesTestHelper.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/ApplicationMasterServiceTestBase.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerWithMultiResourceTypes.java > Resources should be created with ResourceTypesTestHelper instead of TestUtils > - > > Key: YARN-9354 > URL: https://issues.apache.org/jira/browse/YARN-9354 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Andras Gyori >Priority: Trivial > Labels: newbie, newbie++ > Fix For: 3.3.0 > > Attachments: YARN-9354.001.patch, YARN-9354.002.patch, > YARN-9354.003.patch, YARN-9354.004.patch > > > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestUtils#createResource > has not identical, but very similar implementation to > org.apache.hadoop.yarn.resourcetypes.ResourceTypesTestHelper#newResource. 
> Since these two methods do essentially the same thing and > ResourceTypesTestHelper is newer and more widely used, TestUtils#createResource > should be replaced with ResourceTypesTestHelper#newResource in all occurrences.
[jira] [Commented] (YARN-10002) Code cleanup and improvements in ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056081#comment-17056081 ] Hudson commented on YARN-10002: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18040 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18040/]) YARN-10002. Code cleanup and improvements in ConfigurationStoreBaseTest. (snemeth: rev 61f4cf3055e60e64a95f4599ebceac5a924bba02) * (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/PersistentConfigurationStoreBaseTest.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestFSSchedulerConfigurationStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestLeveldbConfigurationStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ZKConfigurationStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/TestZKConfigurationStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/LeveldbConfigurationStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/InMemoryConfigurationStore.java * (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/YarnConfigurationStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/ConfigurationStoreBaseTest.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/conf/FSSchedulerConfigurationStore.java > Code cleanup and improvements in ConfigurationStoreBaseTest > --- > > Key: YARN-10002 > URL: https://issues.apache.org/jira/browse/YARN-10002 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Fix For: 3.3.0 > > Attachments: YARN-10002.001.patch, YARN-10002.002.patch, > YARN-10002.003.patch, YARN-10002.004.patch, YARN-10002.005.patch, > YARN-10002.006.patch > > > * Some protected fields could be package-private > * Could add a helper method that prepares a simple LogMutation with 1, 2 or 3 > updates (Key + value) as this pattern is used extensively in subclasses
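The helper method suggested in the description, one that builds a mutation's key/value updates from a short varargs list, could look something like the sketch below (hypothetical names; the real LogMutation takes an update map plus a user, so only the map-building part is shown):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of the suggested test helper: build an update map from varargs pairs. */
public class LogMutationHelperSketch {
    /**
     * Turns ("k1", "v1", "k2", "v2", ...) into an ordered update map,
     * as a base test would pass into a LogMutation.
     */
    public static Map<String, String> updates(String... keyValuePairs) {
        if (keyValuePairs.length % 2 != 0) {
            throw new IllegalArgumentException("need an even number of arguments");
        }
        Map<String, String> updates = new LinkedHashMap<>();
        for (int i = 0; i < keyValuePairs.length; i += 2) {
            updates.put(keyValuePairs[i], keyValuePairs[i + 1]);
        }
        return updates;
    }
}
```

Subclass tests could then write `updates("key1", "val1", "key2", "val2")` instead of repeating the three-line map-construction boilerplate everywhere.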
[jira] [Commented] (YARN-10154) CS Dynamic Queues cannot be configured with absolute resources
[ https://issues.apache.org/jira/browse/YARN-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056079#comment-17056079 ] Sunil G commented on YARN-10154: Sure [~maniraj...@gmail.com], I am looking into this.
[jira] [Comment Edited] (YARN-10002) Code cleanup and improvements in ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056066#comment-17056066 ] Szilard Nemeth edited comment on YARN-10002 at 3/10/20, 3:47 PM: - Hi [~bteke], Thanks for working on this patch. Next time, please do not touch the import order (I suppose it was changed automatically by your IDE), as it is an unnecessary change for the patch and could complicate backports; an example is TestZKConfigurationStore. Anyway, you did a very good job with this refactor; the latest patch LGTM, committed to trunk. Thanks [~adam.antal] for the review. [~bteke]: Please check how complex it would be to backport this to branch-3.2 and do the backport if it's possible. Thanks
[jira] [Updated] (YARN-10002) Code cleanup and improvements in ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10002: -- Fix Version/s: 3.3.0
[jira] [Updated] (YARN-9354) Resource should be created with ResourceTypesTestHelper instead of TestUtils
[ https://issues.apache.org/jira/browse/YARN-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9354: - Summary: Resource should be created with ResourceTypesTestHelper instead of TestUtils (was: TestUtils#createResource calls should be replaced with ResourceTypesTestHelper#newResource)
[jira] [Updated] (YARN-9354) Resources should be created with ResourceTypesTestHelper instead of TestUtils
[ https://issues.apache.org/jira/browse/YARN-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9354: - Fix Version/s: 3.3.0
[jira] [Updated] (YARN-9354) Resources should be created with ResourceTypesTestHelper instead of TestUtils
[ https://issues.apache.org/jira/browse/YARN-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9354: - Summary: Resources should be created with ResourceTypesTestHelper instead of TestUtils (was: Resource should be created with ResourceTypesTestHelper instead of TestUtils)
[jira] [Commented] (YARN-9354) TestUtils#createResource calls should be replaced with ResourceTypesTestHelper#newResource
[ https://issues.apache.org/jira/browse/YARN-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056073#comment-17056073 ] Szilard Nemeth commented on YARN-9354: -- Thanks [~gandras] for working on this patch. Latest patch LGTM, committed to trunk. Thanks [~pbacsko] for the review. Could you please verify how complex it is to backport this patch to branch-3.2? Thanks.
[jira] [Updated] (YARN-9419) Log a warning if GPU isolation is enabled but LinuxContainerExecutor is disabled
[ https://issues.apache.org/jira/browse/YARN-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-9419: - Fix Version/s: 3.2.2 > Log a warning if GPU isolation is enabled but LinuxContainerExecutor is > disabled > > > Key: YARN-9419 > URL: https://issues.apache.org/jira/browse/YARN-9419 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Andras Gyori >Priority: Major > Fix For: 3.3.0, 3.2.2 > > Attachments: YARN-9419.001.patch, YARN-9419.002.patch, > YARN-9419.003.patch, YARN-9419.branch-3.2.001.patch > > > At minimum, a WARN log (logged once on startup) should notify the user about > a potentially offending configuration: GPU isolation is enabled but LCE is > disabled. > I think this is a dangerous, yet valid configuration: as LCE is the only > container executor that utilizes cgroups, no real HW isolation happens if LCE > is disabled. > Let's suppose we have 2 GPU devices in 1 node: > # NM reports 2 devices (as a Resource) to RM > # RM assigns GPU#1 to container#1, which requests 1 GPU device > # When container#2 also requests 1 GPU device, RM is going to assign > either GPU#1 or GPU#2, so there's no guarantee that GPU#2 will be assigned. > If GPU#1 is assigned to a second container, nasty things could happen.
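The check the issue asks for is a simple startup-time validation of two configuration values. A hypothetical, simplified sketch using a plain map as a stand-in for the NodeManager configuration (the key names below are illustrative assumptions, not necessarily the exact YARN keys):

```java
import java.util.Map;

/** Sketch of the startup check: warn when GPU isolation is on but LCE is off. */
public class GpuIsolationCheckSketch {
    // Assumed, illustrative key names for this sketch.
    static final String RESOURCE_PLUGINS = "yarn.nodemanager.resource-plugins";
    static final String CONTAINER_EXECUTOR = "yarn.nodemanager.container-executor.class";
    static final String LCE =
        "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor";

    /** Returns the warning to log once on startup, or null if the config is fine. */
    public static String validate(Map<String, String> conf) {
        String plugins = conf.getOrDefault(RESOURCE_PLUGINS, "");
        boolean gpuEnabled = plugins.contains("yarn.io/gpu");
        boolean lceEnabled = LCE.equals(conf.get(CONTAINER_EXECUTOR));
        if (gpuEnabled && !lceEnabled) {
            return "GPU isolation is enabled but LinuxContainerExecutor is not "
                + "configured; no cgroups-based hardware isolation will happen.";
        }
        return null;
    }
}
```

The NodeManager would run this once during service init and pass a non-null result to its logger's `warn`, which is exactly the "logged once on startup" behavior the description requests.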
[jira] [Commented] (YARN-9419) Log a warning if GPU isolation is enabled but LinuxContainerExecutor is disabled
[ https://issues.apache.org/jira/browse/YARN-9419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056069#comment-17056069 ] Szilard Nemeth commented on YARN-9419: -- Thanks [~gandras], the patch for 3.2 has a green build and looks good. Committed to branch-3.2. > Log a warning if GPU isolation is enabled but LinuxContainerExecutor is > disabled > > > Key: YARN-9419 > URL: https://issues.apache.org/jira/browse/YARN-9419 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Szilard Nemeth >Assignee: Andras Gyori >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-9419.001.patch, YARN-9419.002.patch, > YARN-9419.003.patch, YARN-9419.branch-3.2.001.patch > > > A WARN log should be added at least (logged once on startup) that notifies > the user about a potentially offending configuration: GPU isolation is > enabled but LCE is disabled. > I think this is a dangerous, yet valid configuration: As LCE is the only > container executor that utilizes cgroups, no real HW-isolation happens if LCE > is disabled. > Let's suppose we have 2 GPU devices in 1 node: > # NM reports 2 devices (as a Resource) to RM > # RM assigns GPU#1 to container#2 that requests 1 GPU device > # When container#2 is also requesting 1 GPU device, RM is going to assign > either GPU#1 or GPU#2, so there's no guarantee that GPU#2 will be assigned. > If GPU#1 is assigned to a second container, nasty things could happen. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10002) Code cleanup and improvements in ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10002: -- Summary: Code cleanup and improvements in ConfigurationStoreBaseTest (was: Code cleanup and improvements ConfigurationStoreBaseTest) > Code cleanup and improvements in ConfigurationStoreBaseTest > --- > > Key: YARN-10002 > URL: https://issues.apache.org/jira/browse/YARN-10002 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Attachments: YARN-10002.001.patch, YARN-10002.002.patch, > YARN-10002.003.patch, YARN-10002.004.patch, YARN-10002.005.patch, > YARN-10002.006.patch > > > * Some protected fields could be package-private > * Could add a helper method that prepares a simple LogMutation with 1, 2 or 3 > updates (Key + value) as this pattern is used extensively in subclasses -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
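The second cleanup bullet in YARN-10002 proposes a helper that builds a simple LogMutation from one to three key/value updates. A minimal sketch of such a helper follows; the `LogMutation` class here is a hypothetical stand-in for `YarnConfigurationStore.LogMutation`, and the varargs signature is one possible design, not the committed one.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LogMutationHelperSketch {
    // Hypothetical stand-in for YarnConfigurationStore.LogMutation.
    static final class LogMutation {
        final Map<String, String> updates;
        final String user;
        LogMutation(Map<String, String> updates, String user) {
            this.updates = updates;
            this.user = user;
        }
    }

    // The kind of helper the issue proposes: build a mutation from
    // alternating key/value pairs so each subclass test avoids
    // repeating the same map-construction boilerplate.
    static LogMutation prepareLogMutation(String... keysAndValues) {
        if (keysAndValues.length % 2 != 0) {
            throw new IllegalArgumentException(
                    "arguments must be key/value pairs");
        }
        Map<String, String> updates = new LinkedHashMap<>();
        for (int i = 0; i < keysAndValues.length; i += 2) {
            updates.put(keysAndValues[i], keysAndValues[i + 1]);
        }
        return new LogMutation(updates, "testUser");
    }

    public static void main(String[] args) {
        // One call replaces several lines of map setup in each subclass.
        LogMutation m = prepareLogMutation("key1", "val1", "key2", "val2");
        System.out.println(m.updates.size());
    }
}
```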
[jira] [Commented] (YARN-10002) Code cleanup and improvements ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056066#comment-17056066 ] Szilard Nemeth commented on YARN-10002: --- Hi [~bteke], Thanks for working on this patch. Next time, please do not touch the import order (I suppose it was done automatically by your IDE) as it is an unnecessary change for the patch and could complicate backports; an example is in TestZKConfigurationStore. Anyway, you did a very good job with this refactor, latest patch LGTM, committed to trunk. Please check how complex it is to backport this to branch-3.2 and do the backport if possible. Thanks > Code cleanup and improvements ConfigurationStoreBaseTest > > > Key: YARN-10002 > URL: https://issues.apache.org/jira/browse/YARN-10002 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Attachments: YARN-10002.001.patch, YARN-10002.002.patch, > YARN-10002.003.patch, YARN-10002.004.patch, YARN-10002.005.patch, > YARN-10002.006.patch > > > * Some protected fields could be package-private > * Could add a helper method that prepares a simple LogMutation with 1, 2 or 3 > updates (Key + value) as this pattern is used extensively in subclasses -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10168) FS-CS Converter: tool doesn't handle min/max resource conversion correctly
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056057#comment-17056057 ] Hudson commented on YARN-10168: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18039 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18039/]) YARN-10168. FS-CS Converter: tool doesn't handle min/max resource (snemeth: rev 9314ef947f4f4620943be83a73a170d9fcf3b020) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverter.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSQueueConverter.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-conversion.xml * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSConfigToCSConfigRuleHandler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigRuleHandler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestConvertedConfigValidator.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/FSQueueConverter.java > FS-CS Converter: tool doesn't handle min/max resource conversion correctly > -- > > Key: YARN-10168 > URL: 
https://issues.apache.org/jira/browse/YARN-10168 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Blocker > Labels: fs2cs > Fix For: 3.3.0 > > Attachments: YARN-10168-001.patch, YARN-10168-002.patch, > YARN-10168-003.patch, YARN-10168-004.patch, YARN-10168-005.patch > > > Trying to understand logics of convert min and max resource from FS to CS, > and found some issues: > 1) > In FSQueueConverter#emitMaximumCapacity > Existing logic in FS is to either specify a maximum percentage for queues > against cluster resources. Or, specify an absolute valued maximum resource. > In the existing FS2CS converter, when a percentage-based maximum resource is > specified, the converter takes a global resource from fs2cs CLI, and applies > percentages to that. It is not correct since the percentage-based value will > get lost, and in the future when cluster resources go up and down, the > maximum resource cannot be changed. > 2) > The logic to deal with min/weight resource is also questionable: > The existing fs2cs tool, it takes precedence of percentage over > absoluteResource, and could set both to a queue config. See > FSQueueConverter.Capacity#toString > However, in CS, comparing to FS, the weights/min resource is quite different: > CS use the same queue.capacity to specify both percentage-based or > absolute-resource-based configs (Similar to how FS deal with maximum > Resource). > The capacity defines guaranteed resource, which also impact fairshare of the > queue. (The more guaranteed resource a queue has, the larger "pie" the queue > can get if there's any additional available resource). > In FS, minResource defined the guaranteed resource, and weight defined how > much the pie can grow to. > So to me, in FS, we should pick and choose either weight or minResource to > generate CS. > 3) > In FS, mix-use of absolute-resource configs (like min/maxResource), and > percentage-based (like weight) is allowed. 
But in CS, it is not allowed. The > reason is discussed on YARN-5881, and find [a]Should we support specifying a > mix of percentage ... > The existing fs2cs doesn't handle the issue, which could set mixed absolute > resource and percentage-based resources. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
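Point 1) of the issue description can be illustrated with a small worked example (illustrative arithmetic only, with hypothetical numbers; the real converter operates on full `Resource` objects, not a single memory figure): applying an FS percentage to the cluster size supplied on the fs2cs CLI "freezes" the maximum into an absolute value that no longer tracks cluster growth.

```java
public class FsToCsMaxCapacitySketch {
    // Converts a percentage-based FS maximum into an absolute value against
    // a fixed cluster resource, as the pre-patch fs2cs behavior is described.
    static long toAbsolute(double maxPercentage, long clusterMemoryMb) {
        return (long) (maxPercentage * clusterMemoryMb);
    }

    public static void main(String[] args) {
        double fsMaxPercentage = 0.5;      // FS queue: max 50% of cluster
        long clusterAtConversion = 102400; // MB, supplied via the fs2cs CLI

        // The percentage is frozen into an absolute cap at conversion time.
        long frozen = toAbsolute(fsMaxPercentage, clusterAtConversion);
        System.out.println(frozen); // 51200

        // If the cluster later doubles, the converted config still caps the
        // queue at the old 51200 MB instead of 50% of the new size below.
        long clusterLater = 204800;
        System.out.println(toAbsolute(fsMaxPercentage, clusterLater)); // 102400
    }
}
```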
[jira] [Commented] (YARN-10168) FS-CS Converter: tool doesn't handle min/max resource conversion correctly
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056043#comment-17056043 ] Sunil G commented on YARN-10168: Thanks [~pbacsko] and [~snemeth] > FS-CS Converter: tool doesn't handle min/max resource conversion correctly > -- > > Key: YARN-10168 > URL: https://issues.apache.org/jira/browse/YARN-10168 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Blocker > Labels: fs2cs > Fix For: 3.3.0 > > Attachments: YARN-10168-001.patch, YARN-10168-002.patch, > YARN-10168-003.patch, YARN-10168-004.patch, YARN-10168-005.patch > > > Trying to understand logics of convert min and max resource from FS to CS, > and found some issues: > 1) > In FSQueueConverter#emitMaximumCapacity > Existing logic in FS is to either specify a maximum percentage for queues > against cluster resources. Or, specify an absolute valued maximum resource. > In the existing FS2CS converter, when a percentage-based maximum resource is > specified, the converter takes a global resource from fs2cs CLI, and applies > percentages to that. It is not correct since the percentage-based value will > get lost, and in the future when cluster resources go up and down, the > maximum resource cannot be changed. > 2) > The logic to deal with min/weight resource is also questionable: > The existing fs2cs tool, it takes precedence of percentage over > absoluteResource, and could set both to a queue config. See > FSQueueConverter.Capacity#toString > However, in CS, comparing to FS, the weights/min resource is quite different: > CS use the same queue.capacity to specify both percentage-based or > absolute-resource-based configs (Similar to how FS deal with maximum > Resource). > The capacity defines guaranteed resource, which also impact fairshare of the > queue. (The more guaranteed resource a queue has, the larger "pie" the queue > can get if there's any additional available resource). 
> In FS, minResource defined the guaranteed resource, and weight defined how > much the pie can grow to. > So to me, in FS, we should pick and choose either weight or minResource to > generate CS. > 3) > In FS, mix-use of absolute-resource configs (like min/maxResource), and > percentage-based (like weight) is allowed. But in CS, it is not allowed. The > reason is discussed on YARN-5881, and find [a]Should we support specifying a > mix of percentage ... > The existing fs2cs doesn't handle the issue, which could set mixed absolute > resource and percentage-based resources. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-10168) FS-CS Converter: tool doesn't handle min/max resource conversion correctly
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated YARN-10168: -- Fix Version/s: 3.3.0 > FS-CS Converter: tool doesn't handle min/max resource conversion correctly > -- > > Key: YARN-10168 > URL: https://issues.apache.org/jira/browse/YARN-10168 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Blocker > Labels: fs2cs > Fix For: 3.3.0 > > Attachments: YARN-10168-001.patch, YARN-10168-002.patch, > YARN-10168-003.patch, YARN-10168-004.patch, YARN-10168-005.patch > > > Trying to understand logics of convert min and max resource from FS to CS, > and found some issues: > 1) > In FSQueueConverter#emitMaximumCapacity > Existing logic in FS is to either specify a maximum percentage for queues > against cluster resources. Or, specify an absolute valued maximum resource. > In the existing FS2CS converter, when a percentage-based maximum resource is > specified, the converter takes a global resource from fs2cs CLI, and applies > percentages to that. It is not correct since the percentage-based value will > get lost, and in the future when cluster resources go up and down, the > maximum resource cannot be changed. > 2) > The logic to deal with min/weight resource is also questionable: > The existing fs2cs tool, it takes precedence of percentage over > absoluteResource, and could set both to a queue config. See > FSQueueConverter.Capacity#toString > However, in CS, comparing to FS, the weights/min resource is quite different: > CS use the same queue.capacity to specify both percentage-based or > absolute-resource-based configs (Similar to how FS deal with maximum > Resource). > The capacity defines guaranteed resource, which also impact fairshare of the > queue. (The more guaranteed resource a queue has, the larger "pie" the queue > can get if there's any additional available resource). 
> In FS, minResource defined the guaranteed resource, and weight defined how > much the pie can grow to. > So to me, in FS, we should pick and choose either weight or minResource to > generate CS. > 3) > In FS, mix-use of absolute-resource configs (like min/maxResource), and > percentage-based (like weight) is allowed. But in CS, it is not allowed. The > reason is discussed on YARN-5881, and find [a]Should we support specifying a > mix of percentage ... > The existing fs2cs doesn't handle the issue, which could set mixed absolute > resource and percentage-based resources. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10168) FS-CS Converter: tool doesn't handle min/max resource conversion correctly
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056037#comment-17056037 ] Szilard Nemeth commented on YARN-10168: --- Hi [~pbacsko], Latest patch looks good to me, committed to trunk. Thanks [~leftnoteasy] for the comments and [~bteke] for the review. Resolving jira. > FS-CS Converter: tool doesn't handle min/max resource conversion correctly > -- > > Key: YARN-10168 > URL: https://issues.apache.org/jira/browse/YARN-10168 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Blocker > Labels: fs2cs > Attachments: YARN-10168-001.patch, YARN-10168-002.patch, > YARN-10168-003.patch, YARN-10168-004.patch, YARN-10168-005.patch > > > Trying to understand logics of convert min and max resource from FS to CS, > and found some issues: > 1) > In FSQueueConverter#emitMaximumCapacity > Existing logic in FS is to either specify a maximum percentage for queues > against cluster resources. Or, specify an absolute valued maximum resource. > In the existing FS2CS converter, when a percentage-based maximum resource is > specified, the converter takes a global resource from fs2cs CLI, and applies > percentages to that. It is not correct since the percentage-based value will > get lost, and in the future when cluster resources go up and down, the > maximum resource cannot be changed. > 2) > The logic to deal with min/weight resource is also questionable: > The existing fs2cs tool, it takes precedence of percentage over > absoluteResource, and could set both to a queue config. See > FSQueueConverter.Capacity#toString > However, in CS, comparing to FS, the weights/min resource is quite different: > CS use the same queue.capacity to specify both percentage-based or > absolute-resource-based configs (Similar to how FS deal with maximum > Resource). > The capacity defines guaranteed resource, which also impact fairshare of the > queue. 
(The more guaranteed resource a queue has, the larger "pie" the queue > can get if there's any additional available resource). > In FS, minResource defined the guaranteed resource, and weight defined how > much the pie can grow to. > So to me, in FS, we should pick and choose either weight or minResource to > generate CS. > 3) > In FS, mix-use of absolute-resource configs (like min/maxResource), and > percentage-based (like weight) is allowed. But in CS, it is not allowed. The > reason is discussed on YARN-5881, and find [a]Should we support specifying a > mix of percentage ... > The existing fs2cs doesn't handle the issue, which could set mixed absolute > resource and percentage-based resources. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-942) In Fair Scheduler documentation, inconsistency on which properties have prefix
[ https://issues.apache.org/jira/browse/YARN-942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056015#comment-17056015 ] Hudson commented on YARN-942: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18038 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18038/]) YARN-942. (ericp: rev ede05b19d1723147430fc426161326d46698507f) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java > In Fair Scheduler documentation, inconsistency on which properties have prefix > -- > > Key: YARN-942 > URL: https://issues.apache.org/jira/browse/YARN-942 > Project: Hadoop YARN > Issue Type: Bug > Components: scheduler >Affects Versions: 2.1.0-beta >Reporter: Sandy Ryza >Assignee: Akira Ajisaka >Priority: Major > Labels: documentation, newbie > Fix For: 2.1.1-beta > > Attachments: YARN-942.patch > > > locality.threshold.node and locality.threshold.rack should have the > yarn.scheduler.fair prefix like the items before them > http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
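For YARN-942, the documentation fix means the locality threshold properties carry the `yarn.scheduler.fair` prefix like the properties listed before them. An illustrative yarn-site.xml fragment (the threshold values here are arbitrary examples, not recommendations):

```xml
<!-- Corrected property names per the Fair Scheduler docs fix -->
<property>
  <name>yarn.scheduler.fair.locality.threshold.node</name>
  <value>0.5</value>
</property>
<property>
  <name>yarn.scheduler.fair.locality.threshold.rack</name>
  <value>0.7</value>
</property>
```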
[jira] [Commented] (YARN-10002) Code cleanup and improvements ConfigurationStoreBaseTest
[ https://issues.apache.org/jira/browse/YARN-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055971#comment-17055971 ] Adam Antal commented on YARN-10002: --- Thanks for the patch [~bteke]! I liked your initiative of cleaning up the interface (I did not want to explicitly request that). +1 (non-binding) > Code cleanup and improvements ConfigurationStoreBaseTest > > > Key: YARN-10002 > URL: https://issues.apache.org/jira/browse/YARN-10002 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Minor > Attachments: YARN-10002.001.patch, YARN-10002.002.patch, > YARN-10002.003.patch, YARN-10002.004.patch, YARN-10002.005.patch, > YARN-10002.006.patch > > > * Some protected fields could be package-private > * Could add a helper method that prepares a simple LogMutation with 1, 2 or 3 > updates (Key + value) as this pattern is used extensively in subclasses -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-10003) YarnConfigurationStore#checkVersion throws exception that belongs to RMStateStore
[ https://issues.apache.org/jira/browse/YARN-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055842#comment-17055842 ] Hadoop QA commented on YARN-10003: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 20s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} branch-3.2 Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 3m 4s{color} | {color:red} root in branch-3.2 failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 50s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-3.2 failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} The patch fails to run checkstyle in hadoop-yarn-server-resourcemanager {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-3.2 failed. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 1m 28s{color} | {color:red} branch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-3.2 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s{color} | {color:red} hadoop-yarn-server-resourcemanager in branch-3.2 failed. 
{color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 17s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 18s{color} | {color:orange} The patch fails to run checkstyle in hadoop-yarn-server-resourcemanager {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 11s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 18s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.7 Server=19.03.7 Image:yetus/hadoop:0f25cbbb251 | | JIRA Issue | YARN-10003 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12996268/YARN-10003.branch-3.2.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3f29615a90a8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.2 / 74fa55a | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | mvninstall | https://builds.apache.org/job/PreCommit-YARN-Build/25671/artifact/out/branch-mvninstall-root.txt | | compile |
[jira] [Updated] (YARN-10003) YarnConfigurationStore#checkVersion throws exception that belongs to RMStateStore
[ https://issues.apache.org/jira/browse/YARN-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Teke updated YARN-10003: - Attachment: YARN-10003.branch-3.2.001.patch > YarnConfigurationStore#checkVersion throws exception that belongs to > RMStateStore > - > > Key: YARN-10003 > URL: https://issues.apache.org/jira/browse/YARN-10003 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Benjamin Teke >Priority: Major > Fix For: 3.3.0 > > Attachments: YARN-10003.001.patch, YARN-10003.002.patch, > YARN-10003.003.patch, YARN-10003.004.patch, YARN-10003.005.patch, > YARN-10003.branch-3.2.001.patch > > > RMStateVersionIncompatibleException is thrown from method "checkVersion". > Moreover, there's a TODO here saying this method is copied from RMStateStore. > We should revise this method a bit. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-9997) Code cleanup in ZKConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055754#comment-17055754 ] Hadoop QA commented on YARN-9997: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 16s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 18s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}147m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | | | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.7 Server=19.03.7 Image:yetus/hadoop:c44943d1fc3 | | JIRA Issue | YARN-9997 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12996245/YARN-9997.005.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 432e36228ab4 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 44afe11 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/25670/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/25670/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-YARN-Build/25670/artifact/out/patch-asflicense-problems.txt | | Max. process+thread
[jira] [Commented] (YARN-10168) FS-CS Converter: tool doesn't handle min/max resource conversion correctly
[ https://issues.apache.org/jira/browse/YARN-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055691#comment-17055691 ] Peter Bacsko commented on YARN-10168: - [~bteke] good catch, removed that. > FS-CS Converter: tool doesn't handle min/max resource conversion correctly > -- > > Key: YARN-10168 > URL: https://issues.apache.org/jira/browse/YARN-10168 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Blocker > Labels: fs2cs > Attachments: YARN-10168-001.patch, YARN-10168-002.patch, > YARN-10168-003.patch, YARN-10168-004.patch, YARN-10168-005.patch > > > Trying to understand logics of convert min and max resource from FS to CS, > and found some issues: > 1) > In FSQueueConverter#emitMaximumCapacity > Existing logic in FS is to either specify a maximum percentage for queues > against cluster resources. Or, specify an absolute valued maximum resource. > In the existing FS2CS converter, when a percentage-based maximum resource is > specified, the converter takes a global resource from fs2cs CLI, and applies > percentages to that. It is not correct since the percentage-based value will > get lost, and in the future when cluster resources go up and down, the > maximum resource cannot be changed. > 2) > The logic to deal with min/weight resource is also questionable: > The existing fs2cs tool, it takes precedence of percentage over > absoluteResource, and could set both to a queue config. See > FSQueueConverter.Capacity#toString > However, in CS, comparing to FS, the weights/min resource is quite different: > CS use the same queue.capacity to specify both percentage-based or > absolute-resource-based configs (Similar to how FS deal with maximum > Resource). > The capacity defines guaranteed resource, which also impact fairshare of the > queue. (The more guaranteed resource a queue has, the larger "pie" the queue > can get if there's any additional available resource). 
> In FS, minResource defined the guaranteed resource, and weight defined how > much the pie can grow to. > So to me, in FS, we should pick and choose either weight or minResource to > generate CS. > 3) > In FS, mix-use of absolute-resource configs (like min/maxResource), and > percentage-based (like weight) is allowed. But in CS, it is not allowed. The > reason is discussed on YARN-5881, and find [a]Should we support specifying a > mix of percentage ... > The existing fs2cs doesn't handle the issue, which could set mixed absolute > resource and percentage-based resources. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-9997) Code cleanup in ZKConfigurationStore
[ https://issues.apache.org/jira/browse/YARN-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Gyori updated YARN-9997: --- Attachment: YARN-9997.005.patch > Code cleanup in ZKConfigurationStore > > > Key: YARN-9997 > URL: https://issues.apache.org/jira/browse/YARN-9997 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Andras Gyori >Priority: Minor > Attachments: YARN-9997.001.patch, YARN-9997.002.patch, > YARN-9997.003.patch, YARN-9997.004.patch, YARN-9997.005.patch > > > Many things can be improved: > * znodeParentPath could be a local variable > * zkManager could be private, VisibleForTesting annotation is not needed > anymore > * Do something with unchecked casts > * zkManager.safeSetData calls almost always have the same set of parameters: > Simplify this > * Extract zkManager calls to their own methods: They are repeated > * Remove TODOs -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
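Two of the YARN-9997 cleanup bullets (the repetitive `zkManager.safeSetData` calls and extracting `zkManager` calls into their own methods) can be sketched as follows. This is a hypothetical illustration with a stubbed-out manager, not the actual `ZKConfigurationStore` or `ZKCuratorManager` API.

```java
import java.util.HashMap;
import java.util.Map;

public class ZkStoreCleanupSketch {
    // Stub stand-in for the curator-based manager used by the store;
    // real signatures differ.
    static final class ZkManager {
        final Map<String, byte[]> store = new HashMap<>();
        void safeSetData(String path, byte[] data, int version) {
            store.put(path, data);
        }
    }

    private final ZkManager zkManager = new ZkManager();
    private final String znodeParentPath;

    ZkStoreCleanupSketch(String znodeParentPath) {
        this.znodeParentPath = znodeParentPath;
    }

    // One extracted helper replacing repeated zkManager.safeSetData calls
    // that all shared the same parent path and version argument.
    void safeSetZkData(String child, byte[] data) {
        zkManager.safeSetData(znodeParentPath + "/" + child, data, -1);
    }

    byte[] get(String child) {
        return zkManager.store.get(znodeParentPath + "/" + child);
    }

    public static void main(String[] args) {
        ZkStoreCleanupSketch store = new ZkStoreCleanupSketch("/confstore");
        // Call sites shrink to a single line each:
        store.safeSetZkData("version", new byte[]{1});
        System.out.println(store.get("version").length);
    }
}
```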