[jira] [Updated] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsuyoshi OZAWA updated YARN-1305:
---------------------------------
    Attachment: YARN-1305.2.patch

Updated the patch to use throwBadConfigurationException when HAUtil#setConfValue() gets invalid input.

RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
-------------------------------------------------------------------------------
                Key: YARN-1305
                URL: https://issues.apache.org/jira/browse/YARN-1305
            Project: Hadoop YARN
         Issue Type: Sub-task
         Components: resourcemanager
   Affects Versions: 2.2.1
           Reporter: Tsuyoshi OZAWA
           Assignee: Tsuyoshi OZAWA
             Labels: ha
        Attachments: YARN-1305.1.patch, YARN-1305.2.patch

When yarn.resourcemanager.ha.enabled is true, RMHAProtocolService#serviceInit calls HAUtil.setAllRpcAddresses. If the configuration values are null, it just throws IllegalArgumentException. It is messy to work out which keys are null, so we should handle the exception and log the names of the keys that are null. The current log dump is as follows:

{code}
2013-10-15 06:24:53,431 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT]
2013-10-15 06:24:54,203 INFO org.apache.hadoop.service.AbstractService: Service RMHAProtocolService failed in state INITED; cause: java.lang.IllegalArgumentException: Property value must not be null
java.lang.IllegalArgumentException: Property value must not be null
	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:816)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:798)
	at org.apache.hadoop.yarn.conf.HAUtil.setConfValue(HAUtil.java:100)
	at org.apache.hadoop.yarn.conf.HAUtil.setAllRpcAddresses(HAUtil.java:105)
	at org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService.serviceInit(RMHAProtocolService.java:60)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:940)
{code}

--
This message was sent by Atlassian JIRA
(v6.1#6144)
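The failure mode described in this issue can be sketched without any Hadoop dependency: a configuration map where an unset per-RM address key yields null, and a validator that names the offending key instead of letting a bare IllegalArgumentException escape. The class, the key list, and the `rm1` id are illustrative assumptions, not the actual HAUtil code.

```java
import java.util.HashMap;
import java.util.Map;

public class HaConfigCheck {
    // A subset of the RM address keys that HA mode requires per RM id
    // (illustrative list, not the full RPC_ADDRESS_CONF_KEYS set).
    static final String[] RPC_ADDRESS_KEYS = {
        "yarn.resourcemanager.address",
        "yarn.resourcemanager.scheduler.address"
    };

    // Returns the first per-RM key whose value is missing, or null if all are set.
    static String findMissingKey(Map<String, String> conf, String rmId) {
        for (String prefix : RPC_ADDRESS_KEYS) {
            String key = prefix + "." + rmId; // e.g. yarn.resourcemanager.address.rm1
            if (conf.get(key) == null) {
                return key;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("yarn.resourcemanager.address.rm1", "host1:8032");
        // The scheduler address for rm1 is deliberately left unset.
        System.out.println("missing key: " + findMissingKey(conf, "rm1"));
        // → missing key: yarn.resourcemanager.scheduler.address.rm1
    }
}
```

Naming the key up front is exactly what the bare "Property value must not be null" message in the stack trace above fails to do.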
[jira] [Commented] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13796508#comment-13796508 ]

Hadoop QA commented on YARN-1305:
---------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12608669/YARN-1305.2.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2186//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2186//console

This message is automatically generated.
[jira] [Created] (YARN-1313) userlog not delete at NodeManager start
shenhong created YARN-1313:
---------------------------
    Summary: userlog not delete at NodeManager start
        Key: YARN-1313
        URL: https://issues.apache.org/jira/browse/YARN-1313
    Project: Hadoop YARN
 Issue Type: Bug
   Reporter: shenhong
[jira] [Updated] (YARN-1313) userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shenhong updated YARN-1313:
---------------------------
    Summary: userlog hadn't delete at NodeManager start  (was: userlog not delete at NodeManager start)

userlog hadn't delete at NodeManager start
------------------------------------------
        Key: YARN-1313
        URL: https://issues.apache.org/jira/browse/YARN-1313
    Project: Hadoop YARN
 Issue Type: Bug
   Reporter: shenhong
[jira] [Updated] (YARN-1313) Userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shenhong updated YARN-1313:
---------------------------
    Summary: Userlog hadn't delete at NodeManager start  (was: userlog hadn't delete at NodeManager start)

Userlog hadn't delete at NodeManager start
------------------------------------------
        Key: YARN-1313
        URL: https://issues.apache.org/jira/browse/YARN-1313
    Project: Hadoop YARN
 Issue Type: Bug
   Reporter: shenhong

At present, userlog should be delete at NodeManager start,
[jira] [Updated] (YARN-1313) userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shenhong updated YARN-1313:
---------------------------
    Description: At present, userlog should be delete at NodeManager start,

userlog hadn't delete at NodeManager start
------------------------------------------
        Key: YARN-1313
        URL: https://issues.apache.org/jira/browse/YARN-1313
    Project: Hadoop YARN
 Issue Type: Bug
   Reporter: shenhong

At present, userlog should be delete at NodeManager start,
[jira] [Updated] (YARN-1313) Userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shenhong updated YARN-1313:
---------------------------
    Description: At present, userlog hadn't delete at NodeManager start, should we delete then?  (was: At present, userlog are hadn't delete at NodeManager start, should we delete then?)

Userlog hadn't delete at NodeManager start
------------------------------------------
        Key: YARN-1313
        URL: https://issues.apache.org/jira/browse/YARN-1313
    Project: Hadoop YARN
 Issue Type: Bug
   Reporter: shenhong

At present, userlog hadn't delete at NodeManager start, should we delete then?
[jira] [Updated] (YARN-1313) Userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shenhong updated YARN-1313:
---------------------------
    Description: At present, userlog are hadn't delete at NodeManager start, should we delete then?  (was: At present, userlog should be delete at NodeManager start, )

Userlog hadn't delete at NodeManager start
------------------------------------------
        Key: YARN-1313
        URL: https://issues.apache.org/jira/browse/YARN-1313
    Project: Hadoop YARN
 Issue Type: Bug
   Reporter: shenhong

At present, userlog are hadn't delete at NodeManager start, should we delete then?
[jira] [Updated] (YARN-1313) Userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shenhong updated YARN-1313:
---------------------------
    Component/s: nodemanager

Userlog hadn't delete at NodeManager start
------------------------------------------
             Key: YARN-1313
             URL: https://issues.apache.org/jira/browse/YARN-1313
         Project: Hadoop YARN
      Issue Type: Bug
      Components: nodemanager
Affects Versions: 2.2.0
        Reporter: shenhong

At present, userlog hadn't delete at NodeManager start, I thank we should delete userlog before NM start.
[jira] [Updated] (YARN-1313) Userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shenhong updated YARN-1313:
---------------------------
    Affects Version/s: 2.2.0

Userlog hadn't delete at NodeManager start
------------------------------------------
             Key: YARN-1313
             URL: https://issues.apache.org/jira/browse/YARN-1313
         Project: Hadoop YARN
      Issue Type: Bug
      Components: nodemanager
Affects Versions: 2.2.0
        Reporter: shenhong
[jira] [Updated] (YARN-1313) Userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

shenhong updated YARN-1313:
---------------------------
    Description: At present, userlog hadn't delete at NodeManager start, I thank we should delete userlog before NM start.  (was: At present, userlog hadn't delete at NodeManager start, should we delete then?)

Userlog hadn't delete at NodeManager start
------------------------------------------
             Key: YARN-1313
             URL: https://issues.apache.org/jira/browse/YARN-1313
         Project: Hadoop YARN
      Issue Type: Bug
      Components: nodemanager
Affects Versions: 2.2.0
        Reporter: shenhong
[jira] [Resolved] (YARN-1313) Userlog hadn't delete at NodeManager start
[ https://issues.apache.org/jira/browse/YARN-1313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Lowe resolved YARN-1313.
------------------------------
    Resolution: Duplicate

This is a duplicate of YARN-194.

Userlog hadn't delete at NodeManager start
------------------------------------------
             Key: YARN-1313
             URL: https://issues.apache.org/jira/browse/YARN-1313
         Project: Hadoop YARN
      Issue Type: Bug
      Components: nodemanager
Affects Versions: 2.2.0
        Reporter: shenhong
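The cleanup shenhong is asking for (removing leftover per-user log directories when the NodeManager starts) can be sketched with plain JDK file APIs. The class name, the directory layout, and the deleteRecursively helper are illustrative assumptions, not the actual NodeManager code or the YARN-194 fix.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class UserlogCleanup {
    // Recursively delete a directory tree; returns the number of entries removed.
    static int deleteRecursively(File f) {
        int removed = 0;
        File[] children = f.listFiles();
        if (children != null) {
            for (File c : children) {
                removed += deleteRecursively(c);
            }
        }
        if (f.delete()) {
            removed++;
        }
        return removed;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a stale userlogs dir left over from a previous NM run.
        File root = Files.createTempDirectory("userlogs").toFile();
        File appDir = new File(root, "application_0001");
        appDir.mkdirs();
        Files.createFile(new File(appDir, "stdout").toPath());

        int removed = deleteRecursively(root);
        System.out.println("removed " + removed + " entries; exists=" + root.exists());
        // → removed 3 entries; exists=false
    }
}
```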
[jira] [Commented] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13796879#comment-13796879 ]

Bikas Saha commented on YARN-1305:
----------------------------------
First of all, how do we hit the original bug? Resource manager addresses have default values, so they should not end up being null even if they are not set up by the user.

About the patch: is there a way to pass the original exception inside the bad-configuration exception? Can we make the error message more generic, e.g. "invalid value of rm_ha_id"?

{code}
+    if (confKey == null) {
+      // Error at addSuffix
+      errmsg = YarnConfiguration.RM_HA_ID
+          + " cannot be allowed to start with '.'";
{code}

Why do we need to log the current conf value in the error?

{code}
+      // Error at Configuration#set
+      errmsg = confKey + " needs to be set correctly "
+          + "in a HA configuration. Configured value is: " + confValue;
{code}
[jira] [Updated] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuan Gong updated YARN-1303:
----------------------------
    Attachment: YARN-1303.3.patch

Allow multiple commands separating with ;
-----------------------------------------
                Key: YARN-1303
                URL: https://issues.apache.org/jira/browse/YARN-1303
            Project: Hadoop YARN
         Issue Type: Improvement
         Components: applications/distributed-shell
           Reporter: Tassapol Athiapinya
           Assignee: Xuan Gong
            Fix For: 2.2.1
        Attachments: YARN-1303.1.patch, YARN-1303.2.patch, YARN-1303.3.patch

In a shell, we can do "ls; ls" to run two commands at once. In distributed shell, this does not work. We should improve it to allow this. There are practical use cases that I know of: running multiple commands, or setting environment variables before a command.
[jira] [Commented] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13796927#comment-13796927 ]

Tsuyoshi OZAWA commented on YARN-1305:
--------------------------------------
Thank you for the review, Bikas. I hit this bug when both yarn.resourcemanager.ha.enabled and yarn.resourcemanager.ha.id are set, but the HAUtil.RPC_ADDRESS_CONF_KEYS entries with the RM id are empty, like this: https://gist.github.com/oza/6982570. We should set default values for HAUtil.RPC_ADDRESS_CONF_KEYS in getConfValueForRMInstance():

{code}
private static String getConfValueForRMInstance(String prefix,
    Configuration conf, String defaultValue) {
  String confKey = getConfKeyForRMInstance(prefix, conf);
  String retVal = conf.get(confKey, defaultValue);
  return retVal;
}
{code}

Can we make the error message more generic?

We can do it with Exception#getMessage():

{code}
if (confKey == null) {
  errmsg = iae.getMessage();
}
...
throwBadConfigurationException(errmsg);
{code}

Why do we need to log the current conf value in the error?

As you indicated, confValue must be null in this code path, so we don't need to log confValue. I'll remove it from the logging.
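The error-translation pattern being discussed (catch the low-level IllegalArgumentException, rethrow a runtime exception that names the offending key and carries the original cause, as Bikas requested) can be sketched as follows. BadHaConfigException and this setConfValue are illustrative stand-ins, not the YARN API.

```java
import java.util.HashMap;
import java.util.Map;

public class ConfValueSetter {
    // Illustrative analogue of YarnRuntimeException thrown by
    // throwBadConfigurationException; carries the original cause.
    static class BadHaConfigException extends RuntimeException {
        BadHaConfigException(String msg, Throwable cause) { super(msg, cause); }
    }

    static void setConfValue(Map<String, String> conf, String key, String value) {
        try {
            // Mirrors Configuration#set's precondition on null values.
            if (value == null) {
                throw new IllegalArgumentException("Property value must not be null");
            }
            conf.put(key, value);
        } catch (IllegalArgumentException iae) {
            // Surface the key name so the operator knows which property to fix.
            throw new BadHaConfigException(
                "Invalid configuration! " + key
                    + " needs to be set in an HA configuration.", iae);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        try {
            setConfValue(conf, "yarn.resourcemanager.address.rm1", null);
        } catch (BadHaConfigException e) {
            System.out.println(e.getMessage());
            System.out.println("cause: " + e.getCause().getMessage());
        }
    }
}
```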
[jira] [Commented] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13796933#comment-13796933 ]

Hadoop QA commented on YARN-1303:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12608727/YARN-1303.3.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
  org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2187//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2187//console

This message is automatically generated.
[jira] [Commented] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13796962#comment-13796962 ]

Bikas Saha commented on YARN-1305:
----------------------------------
So the error is that yarn.resourcemanager.address.rm1 was not set in the conf. The message should clearly show that yarn.resourcemanager.address.rm1 was not set, and it should say that the rm-id-specific config is missing. Can you please paste the error message produced by your latest updated patch?
[jira] [Commented] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13796980#comment-13796980 ]

Tsuyoshi OZAWA commented on YARN-1305:
--------------------------------------
The error message with my latest patch is as follows:

{code}
2013-10-16 16:56:40,978 INFO org.apache.hadoop.service.AbstractService: Service RMHAProtocolService failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid configuration! yarn.resourcemanager.address.rm1 needs to be set correctlyin a HA configuration. Configured value is: null
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid configuration! yarn.resourcemanager.address.rm1 needs to be set correctlyin a HA configuration. Configured value is: null
	at org.apache.hadoop.yarn.conf.HAUtil.throwBadConfigurationException(HAUtil.java:48)
	at org.apache.hadoop.yarn.conf.HAUtil.setConfValue(HAUtil.java:125)
	at org.apache.hadoop.yarn.conf.HAUtil.setAllRpcAddresses(HAUtil.java:131)
	at org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService.serviceInit(RMHAProtocolService.java:60)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:940)
{code}
[jira] [Commented] (YARN-1172) Convert *SecretManagers in the RM to services
[ https://issues.apache.org/jira/browse/YARN-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13796993#comment-13796993 ]

Tsuyoshi OZAWA commented on YARN-1172:
--------------------------------------
The latest patch is just refactoring, so we don't need more tests, IMO.

Convert *SecretManagers in the RM to services
---------------------------------------------
                Key: YARN-1172
                URL: https://issues.apache.org/jira/browse/YARN-1172
            Project: Hadoop YARN
         Issue Type: Sub-task
         Components: resourcemanager
   Affects Versions: 2.1.0-beta
           Reporter: Karthik Kambatla
           Assignee: Tsuyoshi OZAWA
        Attachments: YARN-1172.1.patch, YARN-1172.2.patch
[jira] [Commented] (YARN-807) When querying apps by queue, iterating over all apps is inefficient and limiting
[ https://issues.apache.org/jira/browse/YARN-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797012#comment-13797012 ]

Thomas Graves commented on YARN-807:
------------------------------------
Any reason this hasn't been committed?

When querying apps by queue, iterating over all apps is inefficient and limiting
--------------------------------------------------------------------------------
                Key: YARN-807
                URL: https://issues.apache.org/jira/browse/YARN-807
            Project: Hadoop YARN
         Issue Type: Improvement
   Affects Versions: 2.0.4-alpha
           Reporter: Sandy Ryza
           Assignee: Sandy Ryza
        Attachments: YARN-807.patch

The question "which apps are in queue x" can be asked via the RM REST APIs, through the ClientRMService, and through the command line. In all these cases, the question is answered by scanning through every RMApp and filtering by the app's queue name. All schedulers maintain a mapping of queues to applications, so I think it would make more sense to ask the schedulers which applications are in a given queue. This is what was done in MR1. It would also have the advantage of allowing a parent queue to return all the applications in leaf queues under it, and of allowing queue-name aliases, as in the way that root.default and default refer to the same queue in the fair scheduler.
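The idea in this issue (answer "which apps are in queue x" from the scheduler's queue-to-applications mapping instead of scanning every RMApp) can be sketched with a plain map. This is an illustrative data structure, not the actual YARN scheduler API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class QueueIndex {
    // queue name -> application ids, maintained as apps are submitted.
    private final Map<String, List<String>> appsByQueue = new HashMap<>();

    void addApp(String queue, String appId) {
        appsByQueue.computeIfAbsent(queue, q -> new ArrayList<>()).add(appId);
    }

    // A map lookup, not a scan over every application in the cluster.
    List<String> getAppsInQueue(String queue) {
        return appsByQueue.getOrDefault(queue, Collections.emptyList());
    }

    public static void main(String[] args) {
        QueueIndex idx = new QueueIndex();
        idx.addApp("default", "app_0001");
        idx.addApp("research", "app_0002");
        System.out.println(idx.getAppsInQueue("default"));
        // → [app_0001]
    }
}
```

A parent queue could then aggregate the lists of its leaf queues, which is the extra capability the description mentions.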
[jira] [Commented] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797013#comment-13797013 ] Hadoop QA commented on YARN-1303: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608743/YARN-1303.3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell: org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2188//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2188//console This message is automatically generated. Allow multiple commands separating with ; - Key: YARN-1303 URL: https://issues.apache.org/jira/browse/YARN-1303 Project: Hadoop YARN Issue Type: Improvement Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1303.1.patch, YARN-1303.2.patch, YARN-1303.3.patch, YARN-1303.3.patch In shell, we can do ls; ls to run 2 commands at once. 
In distributed shell, this is not working. We should improve to allow this to occur. There are practical use cases that I know of to run multiple commands or to set environment variables before a command. -- This message was sent by Atlassian JIRA (v6.1#6144)
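The requested behavior can be sketched outside YARN with a small client-side splitter (hypothetical, not part of the distributed-shell code): break a ';'-separated shell_command value into the individual commands /bin/sh would run. Quoting and escaping are deliberately ignored in this sketch.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class CommandSplitter {
    // Split "ls; ls" into ["ls", "ls"], trimming whitespace and
    // dropping empty segments. Quote-aware splitting is out of scope here.
    public static List<String> split(String commandLine) {
        return Arrays.stream(commandLine.split(";"))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }
}
```

As the later comments on this issue note, an alternative to splitting is simply asking clients to wrap multiple commands in a shell script.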
[jira] [Commented] (YARN-807) When querying apps by queue, iterating over all apps is inefficient and limiting
[ https://issues.apache.org/jira/browse/YARN-807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797019#comment-13797019 ] Sandy Ryza commented on YARN-807: - [~acmurthy]'s +1 was to the idea. This hasn't been reviewed yet (but is ready for review). When querying apps by queue, iterating over all apps is inefficient and limiting - Key: YARN-807 URL: https://issues.apache.org/jira/browse/YARN-807 Project: Hadoop YARN Issue Type: Improvement Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-807.patch The question which apps are in queue x can be asked via the RM REST APIs, through the ClientRMService, and through the command line. In all these cases, the question is answered by scanning through every RMApp and filtering by the app's queue name. All schedulers maintain a mapping of queues to applications. I think it would make more sense to ask the schedulers which applications are in a given queue. This is what was done in MR1. This would also have the advantage of allowing a parent queue to return all the applications on leaf queues under it, and allow queue name aliases, as in the way that root.default and default refer to the same queue in the fair scheduler. -- This message was sent by Atlassian JIRA (v6.1#6144)
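The queue-to-apps mapping Sandy describes can be sketched with plain collections (names here are illustrative, not the scheduler's actual API): answering "which apps are in queue x" becomes a direct map lookup instead of a filter over every RMApp.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class QueueAppIndex {
    // queue name -> application ids; maintained as apps are submitted
    private final Map<String, List<String>> appsByQueue = new HashMap<>();

    public void addApp(String queue, String appId) {
        appsByQueue.computeIfAbsent(queue, q -> new ArrayList<>()).add(appId);
    }

    // Direct lookup: O(apps in the queue), not O(all apps in the cluster).
    public List<String> getAppsInQueue(String queue) {
        return appsByQueue.getOrDefault(queue, Collections.emptyList());
    }
}
```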
[jira] [Updated] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-1303: Attachment: YARN-1303.4.patch Allow multiple commands separating with ; - Key: YARN-1303 URL: https://issues.apache.org/jira/browse/YARN-1303 Project: Hadoop YARN Issue Type: Improvement Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1303.1.patch, YARN-1303.2.patch, YARN-1303.3.patch, YARN-1303.3.patch, YARN-1303.4.patch In shell, we can do ls; ls to run 2 commands at once. In distributed shell, this is not working. We should improve to allow this to occur. There are practical use cases that I know of to run multiple commands or to set environment variables before a command. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797021#comment-13797021 ] Tsuyoshi OZAWA commented on YARN-1305: -- Can we make the error message more generic? e.g. invalid value of rm_ha_id. Ah, your proposal is better. I'll update a patch. RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException --- Key: YARN-1305 URL: https://issues.apache.org/jira/browse/YARN-1305 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.2.1 Reporter: Tsuyoshi OZAWA Assignee: Tsuyoshi OZAWA Labels: ha Attachments: YARN-1305.1.patch, YARN-1305.2.patch When yarn.resourcemanager.ha.enabled is true, RMHAProtocolService#serviceInit calls HAUtil.setAllRpcAddresses. If the configuration values are null, it just throws IllegalArgumentException. It's messy to analyse which keys are null, so we should handle it and log the name of keys which are null. A current log dump is as follows: {code} 2013-10-15 06:24:53,431 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT] 2013-10-15 06:24:54,203 INFO org.apache.hadoop.service.AbstractService: Service RMHAProtocolService failed in state INITED; cause: java.lang.IllegalArgumentException: Property value must not be null java.lang.IllegalArgumentException: Property value must not be null at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) at org.apache.hadoop.conf.Configuration.set(Configuration.java:816) at org.apache.hadoop.conf.Configuration.set(Configuration.java:798) at org.apache.hadoop.yarn.conf.HAUtil.setConfValue(HAUtil.java:100) at org.apache.hadoop.yarn.conf.HAUtil.setAllRpcAddresses(HAUtil.java:105) at org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService.serviceInit(RMHAProtocolService.java:60) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:940) {code} -- This message was sent by Atlassian JIRA (v6.1#6144)
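The fix described above, failing with a message that names the offending key rather than a bare "Property value must not be null", can be sketched roughly as follows. A HashMap stands in for Hadoop's Configuration class, and the message text mirrors the log output quoted later in this thread; the names are illustrative.

```java
import java.util.Map;

public class HaConfCheck {
    // Validate before setting, so the exception can name the missing key
    // (the role played by throwBadConfigurationException in the patch).
    public static void setConfValue(Map<String, String> conf,
                                    String key, String value) {
        if (value == null) {
            throw new IllegalArgumentException("Invalid configuration! " + key
                + " needs to be set correctly in a HA configuration.");
        }
        conf.put(key, value);
    }
}
```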
[jira] [Updated] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsuyoshi OZAWA updated YARN-1305: - Attachment: YARN-1305.3.patch Updated a patch based on Bikas's review for better error message. The case without yarn.resourcemanager.address.RM_HA_ID: {code} 2013-10-16 17:38:48,358 INFO org.apache.hadoop.service.AbstractService: Service RMHAProtocolService failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid configuration! yarn.resourcemanager.address.rm1 needs to be set correctly in a HA configuration. org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid configuration! yarn.resourcemanager.address.rm1 needs to be set correctly in a HA configuration. at org.apache.hadoop.yarn.conf.HAUtil.throwBadConfigurationException(HAUtil.java:48) at org.apache.hadoop.yarn.conf.HAUtil.setConfValue(HAUtil.java:124) at org.apache.hadoop.yarn.conf.HAUtil.setAllRpcAddresses(HAUtil.java:130) at org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService.serviceInit(RMHAProtocolService.java:60) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:940) 2013-10-16 17:38:48,361 INFO org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService: Transitioning to standby 2013-10-16 17:38:48,361 INFO org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService: Transitioned to standby {code} The case with invalid yarn.resourcemanager.address.RM_HA_ID(starting with '.'): {code} 2013-10-16 17:44:40,467 INFO org.apache.hadoop.service.AbstractService: Service RMHAProtocolService failed in state INITED; 
cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid configuration! Invalid value of yarn.resourcemanager.ha.id org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Invalid configuration! Invalid value of yarn.resourcemanager.ha.id at org.apache.hadoop.yarn.conf.HAUtil.throwBadConfigurationException(HAUtil.java:48) at org.apache.hadoop.yarn.conf.HAUtil.setConfValue(HAUtil.java:124) at org.apache.hadoop.yarn.conf.HAUtil.setAllRpcAddresses(HAUtil.java:130) at org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService.serviceInit(RMHAProtocolService.java:60) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:940) {code} RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException --- Key: YARN-1305 URL: https://issues.apache.org/jira/browse/YARN-1305 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.2.1 Reporter: Tsuyoshi OZAWA Assignee: Tsuyoshi OZAWA Labels: ha Attachments: YARN-1305.1.patch, YARN-1305.2.patch, YARN-1305.3.patch When yarn.resourcemanager.ha.enabled is true, RMHAProtocolService#serviceInit calls HAUtil.setAllRpcAddresses. If the configuration values are null, it just throws IllegalArgumentException. It's messy to analyse which keys are null, so we should handle it and log the name of keys which are null. 
A current log dump is as follows: {code} 2013-10-15 06:24:53,431 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT] 2013-10-15 06:24:54,203 INFO org.apache.hadoop.service.AbstractService: Service RMHAProtocolService failed in state INITED; cause: java.lang.IllegalArgumentException: Property value must not be null java.lang.IllegalArgumentException: Property value must not be null at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) at org.apache.hadoop.conf.Configuration.set(Configuration.java:816) at org.apache.hadoop.conf.Configuration.set(Configuration.java:798) at org.apache.hadoop.yarn.conf.HAUtil.setConfValue(HAUtil.java:100) at
[jira] [Commented] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797048#comment-13797048 ] Hadoop QA commented on YARN-1303: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608752/YARN-1303.4.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell: org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2189//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2189//console This message is automatically generated. 
Allow multiple commands separating with ; - Key: YARN-1303 URL: https://issues.apache.org/jira/browse/YARN-1303 Project: Hadoop YARN Issue Type: Improvement Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1303.1.patch, YARN-1303.2.patch, YARN-1303.3.patch, YARN-1303.3.patch, YARN-1303.4.patch In shell, we can do ls; ls to run 2 commands at once. In distributed shell, this is not working. We should improve to allow this to occur. There are practical use cases that I know of to run multiple commands or to set environment variables before a command. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-1303: Attachment: YARN-1303.4.patch Allow multiple commands separating with ; - Key: YARN-1303 URL: https://issues.apache.org/jira/browse/YARN-1303 Project: Hadoop YARN Issue Type: Improvement Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1303.1.patch, YARN-1303.2.patch, YARN-1303.3.patch, YARN-1303.3.patch, YARN-1303.4.patch, YARN-1303.4.patch In shell, we can do ls; ls to run 2 commands at once. In distributed shell, this is not working. We should improve to allow this to occur. There are practical use cases that I know of to run multiple commands or to set environment variables before a command. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1305) RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException
[ https://issues.apache.org/jira/browse/YARN-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797071#comment-13797071 ] Hadoop QA commented on YARN-1305: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608757/YARN-1305.3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2190//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2190//console This message is automatically generated. RMHAProtocolService#serviceInit should handle HAUtil's IllegalArgumentException --- Key: YARN-1305 URL: https://issues.apache.org/jira/browse/YARN-1305 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.2.1 Reporter: Tsuyoshi OZAWA Assignee: Tsuyoshi OZAWA Labels: ha Attachments: YARN-1305.1.patch, YARN-1305.2.patch, YARN-1305.3.patch When yarn.resourcemanager.ha.enabled is true, RMHAProtocolService#serviceInit calls HAUtil.setAllRpcAddresses. 
If the configuration values are null, it just throws IllegalArgumentException. It's messy to analyse which keys are null, so we should handle it and log the name of keys which are null. A current log dump is as follows: {code} 2013-10-15 06:24:53,431 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT] 2013-10-15 06:24:54,203 INFO org.apache.hadoop.service.AbstractService: Service RMHAProtocolService failed in state INITED; cause: java.lang.IllegalArgumentException: Property value must not be null java.lang.IllegalArgumentException: Property value must not be null at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88) at org.apache.hadoop.conf.Configuration.set(Configuration.java:816) at org.apache.hadoop.conf.Configuration.set(Configuration.java:798) at org.apache.hadoop.yarn.conf.HAUtil.setConfValue(HAUtil.java:100) at org.apache.hadoop.yarn.conf.HAUtil.setAllRpcAddresses(HAUtil.java:105) at org.apache.hadoop.yarn.server.resourcemanager.RMHAProtocolService.serviceInit(RMHAProtocolService.java:60) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:187) at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:940) {code} -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797085#comment-13797085 ] Hadoop QA commented on YARN-1303: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608763/YARN-1303.4.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell: org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2191//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2191//console This message is automatically generated. 
Allow multiple commands separating with ; - Key: YARN-1303 URL: https://issues.apache.org/jira/browse/YARN-1303 Project: Hadoop YARN Issue Type: Improvement Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1303.1.patch, YARN-1303.2.patch, YARN-1303.3.patch, YARN-1303.3.patch, YARN-1303.4.patch, YARN-1303.4.patch In shell, we can do ls; ls to run 2 commands at once. In distributed shell, this is not working. We should improve to allow this to occur. There are practical use cases that I know of to run multiple commands or to set environment variables before a command. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1288) Make Fair Scheduler ACLs more user friendly
[ https://issues.apache.org/jira/browse/YARN-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797112#comment-13797112 ] Alejandro Abdelnur commented on YARN-1288: --
* Queue.java:
** false change on line 46
** unused import on line 27
* FSLeafQueue.java:
** unused imports on lines 27, 34
* FSQueue.java:
** unused imports on lines 23, 24, 29
* QueueManager.java:
** false changes on lines 75, 296, 414, 417
** the NOONE_ACL constant should be NO_ONE_ACL
** the default getQueueAcl() behavior has not changed, correct? The following comment is removed by the patch; I think we should keep it in the getQueueAcl() method:
{code}
// Root queue should have empty ACLs. As a queue's ACL is the union of
// its ACL and all its parents' ACLs, setting the root's to empty will
// neither allow nor prohibit more access to its children.
{code}
Make Fair Scheduler ACLs more user friendly --- Key: YARN-1288 URL: https://issues.apache.org/jira/browse/YARN-1288 Project: Hadoop YARN Issue Type: Bug Components: scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1288.patch The Fair Scheduler currently defaults the root queue's acl to empty and all other queues' acl to *. Now that YARN-1258 enables configuring the root queue, we should reverse this. This will also bring the Fair Scheduler in line with the Capacity Scheduler. We should also not trim the acl strings, which makes it impossible to only specify groups in an acl. -- This message was sent by Atlassian JIRA (v6.1#6144)
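The comment discussed above describes a queue's effective ACL as the union of its own ACL and all its ancestors' ACLs. A minimal sketch of that union, using sets of user names in place of real ACL objects (all names here are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class AclQueue {
    final Set<String> allowedUsers = new HashSet<>();
    final AclQueue parent; // null for the root queue

    public AclQueue(AclQueue parent) { this.parent = parent; }

    // A user has access if any queue on the path to the root lists them.
    // An empty root ACL therefore neither grants nor blocks access below it.
    public boolean hasAccess(String user) {
        for (AclQueue q = this; q != null; q = q.parent) {
            if (q.allowedUsers.contains(user)) return true;
        }
        return false;
    }
}
```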
[jira] [Updated] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-1303: Attachment: YARN-1303.5.patch Let us still keep the original solution. Just asking the clients to create the shell script when they want to do multiple commands and command pipeline. Allow multiple commands separating with ; - Key: YARN-1303 URL: https://issues.apache.org/jira/browse/YARN-1303 Project: Hadoop YARN Issue Type: Improvement Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1303.1.patch, YARN-1303.2.patch, YARN-1303.3.patch, YARN-1303.3.patch, YARN-1303.4.patch, YARN-1303.4.patch, YARN-1303.5.patch In shell, we can do ls; ls to run 2 commands at once. In distributed shell, this is not working. We should improve to allow this to occur. There are practical use cases that I know of to run multiple commands or to set environment variables before a command. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1307) Rethink znode structure for RM HA
[ https://issues.apache.org/jira/browse/YARN-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797118#comment-13797118 ] Tsuyoshi OZAWA commented on YARN-1307: -- I'm creating a summary of the proposals in YARN-1222 and YARN-659. I'll share it soon. Rethink znode structure for RM HA - Key: YARN-1307 URL: https://issues.apache.org/jira/browse/YARN-1307 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Tsuyoshi OZAWA Rethinking the znode structure for RM HA has been proposed in some JIRAs (YARN-659, YARN-1222). The motivation of this JIRA is quoted from Bikas' comment in YARN-1222: {quote} We should move to creating a node hierarchy for apps such that all znodes for an app are stored under an app znode instead of the app root znode. This will help in removeApplication and also in scaling better on ZK. The earlier code was written this way to ensure create/delete happens under a root znode for fencing. But given that we have moved to multi-operations globally, this isn't required anymore. {quote} -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues
[ https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797138#comment-13797138 ] Alejandro Abdelnur commented on YARN-1241: --
* FairScheduler.java
** false changes on lines 192, 620, 709, 713
* MaxRunningAppsEnforcer.java
** Wouldn't it make sense to have all the methods take an AppSchedulable?
* FSLeafQueue.java
** Instead of keeping track of non-runnable apps here, why not have a separate data structure in which to park the non-runnable apps? Then the queues would never see a non-runnable app.
* Unless I'm missing something, we are tracking runnable apps only; are we tracking non-runnable apps? If not, it would be handy to have an idea of the demand of a queue.
In Fair Scheduler maxRunningApps does not work for non-leaf queues -- Key: YARN-1241 URL: https://issues.apache.org/jira/browse/YARN-1241 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1241-1.patch, YARN-1241-2.patch, YARN-1241-3.patch, YARN-1241-4.patch, YARN-1241-5.patch, YARN-1241.patch Setting the maxRunningApps property on a parent queue should ensure that the total number of running apps across all its subqueues cannot exceed it -- This message was sent by Atlassian JIRA (v6.1#6144)
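The non-leaf maxRunningApps semantics described in this issue can be sketched as a walk up the queue hierarchy: an app may start only if every ancestor queue is under its limit, with non-leaf counts aggregated over descendants. This is a hypothetical structure, not the Fair Scheduler's actual classes.

```java
public class QueueNode {
    private final int maxRunningApps;
    private final QueueNode parent; // null for the root queue
    private int runningApps;        // aggregated over descendants for parents

    public QueueNode(int maxRunningApps, QueueNode parent) {
        this.maxRunningApps = maxRunningApps;
        this.parent = parent;
    }

    // An app may run only if this queue AND every ancestor is under limit.
    public boolean canRunMoreApps() {
        for (QueueNode q = this; q != null; q = q.parent) {
            if (q.runningApps >= q.maxRunningApps) return false;
        }
        return true;
    }

    // Starting an app increments the count on the whole ancestor chain.
    public void appStarted() {
        for (QueueNode q = this; q != null; q = q.parent) q.runningApps++;
    }
}
```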
[jira] [Commented] (YARN-1139) [Umbrella] Convert all RM components to Services
[ https://issues.apache.org/jira/browse/YARN-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797156#comment-13797156 ] Zhijie Shen commented on YARN-1139: --- When converting the components into services, one thing we may need to take care of is that exceptions will be isolated by the service model. For example:
{code}
try {
  this.scheduler.reinitialize(conf, this.rmContext);
} catch (IOException ioe) {
  throw new RuntimeException("Failed to initialize scheduler", ioe);
}
{code}
If the scheduler turns into a service, the RM cannot catch the exception like that. Previously, we also met the problem that a composite service cannot directly receive the exception thrown by its child. [Umbrella] Convert all RM components to Services Key: YARN-1139 URL: https://issues.apache.org/jira/browse/YARN-1139 Project: Hadoop YARN Issue Type: Improvement Components: resourcemanager Affects Versions: 2.1.0-beta Reporter: Karthik Kambatla Assignee: Tsuyoshi OZAWA Some of the RM components - state store, scheduler etc. are not services. Converting them to services goes well with the Always On and Active service separation proposed on YARN-1098. Given that some of them already have start(), stop() methods, it should not be too hard to convert them to services. That would also be a cleaner way of addressing YARN-1125. -- This message was sent by Atlassian JIRA (v6.1#6144)
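The isolation Zhijie describes can be shown with a minimal stand-in for the service model (Hadoop wraps into ServiceStateException; plain RuntimeException stands in for it here, and all class names are illustrative): once initialization happens inside serviceInit(), a parent can no longer catch the original checked IOException directly, only the unchecked wrapper.

```java
import java.io.IOException;

abstract class SketchService {
    // Mimics AbstractService.init(): checked exceptions from serviceInit()
    // are rethrown as unchecked, so callers see only RuntimeException.
    public final void init() {
        try {
            serviceInit();
        } catch (Exception e) {
            throw (e instanceof RuntimeException)
                ? (RuntimeException) e : new RuntimeException(e);
        }
    }

    protected abstract void serviceInit() throws Exception;
}

class SchedulerService extends SketchService {
    @Override
    protected void serviceInit() throws IOException {
        // Stand-in for scheduler.reinitialize(conf, rmContext) failing.
        throw new IOException("reinitialize failed");
    }
}
```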
[jira] [Commented] (YARN-934) HistoryStorage writer interface for Application History Server
[ https://issues.apache.org/jira/browse/YARN-934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797176#comment-13797176 ] Mayank Bansal commented on YARN-934: +1 Looks good HistoryStorage writer interface for Application History Server -- Key: YARN-934 URL: https://issues.apache.org/jira/browse/YARN-934 Project: Hadoop YARN Issue Type: Sub-task Reporter: Zhijie Shen Assignee: Zhijie Shen Fix For: YARN-321 Attachments: YARN-934.1.patch, YARN-934.2.patch, YARN-934.3.patch, YARN-934.4.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1303) Allow multiple commands separating with ;
[ https://issues.apache.org/jira/browse/YARN-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797178#comment-13797178 ] Hadoop QA commented on YARN-1303: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608775/YARN-1303.5.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2192//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2192//console This message is automatically generated. Allow multiple commands separating with ; - Key: YARN-1303 URL: https://issues.apache.org/jira/browse/YARN-1303 Project: Hadoop YARN Issue Type: Improvement Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1303.1.patch, YARN-1303.2.patch, YARN-1303.3.patch, YARN-1303.3.patch, YARN-1303.4.patch, YARN-1303.4.patch, YARN-1303.5.patch In shell, we can do ls; ls to run 2 commands at once. 
In distributed shell, this is not working. We should improve to allow this to occur. There are practical use cases that I know of to run multiple commands or to set environment variables before a command. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Assigned] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong reassigned YARN-1314: --- Assignee: Xuan Gong Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Distributed shell cannot accept more than 1 parameter in the arguments part. All of these commands are treated as having 1 parameter:
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distributed shell jar -shell_command echo -shell_args 'My name is Teddy'
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distributed shell jar -shell_command echo -shell_args ''My name' 'is Teddy''
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distributed shell jar -shell_command echo -shell_args 'My name' 'is Teddy'
-- This message was sent by Atlassian JIRA (v6.1#6144)
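One hypothetical direction for a fix (not what the eventual patch necessarily does): when relaying multiple -shell_args values onto the container command line, quote each argument individually so that arguments containing spaces survive as single words instead of being re-split or merged.

```java
import java.util.List;

public class ArgJoiner {
    // Join arguments for a shell command line, double-quoting each one and
    // escaping embedded quotes, so "My name" and "is Teddy" remain two args.
    public static String join(List<String> args) {
        StringBuilder sb = new StringBuilder();
        for (String a : args) {
            if (sb.length() > 0) sb.append(' ');
            sb.append('"').append(a.replace("\"", "\\\"")).append('"');
        }
        return sb.toString();
    }
}
```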
[jira] [Commented] (YARN-1139) [Umbrella] Convert all RM components to Services
[ https://issues.apache.org/jira/browse/YARN-1139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797242#comment-13797242 ] Tsuyoshi OZAWA commented on YARN-1139: -- Thank you for the advice, [~zjshen]. I've checked the AbstractService code, and I see that we need to convert all exceptions into ServiceStateException (a subclass of RuntimeException) as you described. I'll check and update the patch for YARN-1172 based on your advice. [Umbrella] Convert all RM components to Services Key: YARN-1139 URL: https://issues.apache.org/jira/browse/YARN-1139 Project: Hadoop YARN Issue Type: Improvement Components: resourcemanager Affects Versions: 2.1.0-beta Reporter: Karthik Kambatla Assignee: Tsuyoshi OZAWA Some of the RM components - state store, scheduler etc. are not services. Converting them to services goes well with the Always On and Active service separation proposed on YARN-1098. Given that some of them already have start(), stop() methods, it should not be too hard to convert them to services. That would also be a cleaner way of addressing YARN-1125. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Assigned] (YARN-1315) TestQueueACLs should also test FairScheduler
[ https://issues.apache.org/jira/browse/YARN-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza reassigned YARN-1315: Assignee: Sandy Ryza TestQueueACLs should also test FairScheduler Key: YARN-1315 URL: https://issues.apache.org/jira/browse/YARN-1315 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager, scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (YARN-1315) TestQueueACLs should also test FairScheduler
Sandy Ryza created YARN-1315: Summary: TestQueueACLs should also test FairScheduler Key: YARN-1315 URL: https://issues.apache.org/jira/browse/YARN-1315 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager, scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1288) Make Fair Scheduler ACLs more user friendly
[ https://issues.apache.org/jira/browse/YARN-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797308#comment-13797308 ] Sandy Ryza commented on YARN-1288: -- Uploaded a new patch. Addressed false changes and unused imports. bq. NOONE_ACL constant should be NO_ONE_ACL Changed this to EVERYBODY_ACL and NOBODY_ACL. bq. default getQueueAcl() behavior has not changed, correct? The behavior has changed. Added a comment to the getQueueACL method that explains the behavior. Make Fair Scheduler ACLs more user friendly --- Key: YARN-1288 URL: https://issues.apache.org/jira/browse/YARN-1288 Project: Hadoop YARN Issue Type: Bug Components: scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1288-1.patch, YARN-1288.patch The Fair Scheduler currently defaults the root queue's acl to empty and all other queues' acl to *. Now that YARN-1258 enables configuring the root queue, we should reverse this. This will also bring the Fair Scheduler in line with the Capacity Scheduler. We should also not trim the acl strings, which makes it impossible to only specify groups in an acl. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (YARN-1288) Make Fair Scheduler ACLs more user friendly
[ https://issues.apache.org/jira/browse/YARN-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza updated YARN-1288: - Attachment: YARN-1288-1.patch Make Fair Scheduler ACLs more user friendly --- Key: YARN-1288 URL: https://issues.apache.org/jira/browse/YARN-1288 Project: Hadoop YARN Issue Type: Bug Components: scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1288-1.patch, YARN-1288.patch The Fair Scheduler currently defaults the root queue's acl to empty and all other queues' acl to *. Now that YARN-1258 enables configuring the root queue, we should reverse this. This will also bring the Fair Scheduler in line with the Capacity Scheduler. We should also not trim the acl strings, which makes it impossible to only specify groups in an acl. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1307) Rethink znode structure for RM HA
[ https://issues.apache.org/jira/browse/YARN-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797346#comment-13797346 ] Tsuyoshi OZAWA commented on YARN-1307: -- This is a summary of the current znode structure (Before) and the proposed znode structure (After). Could you review these?

Before:
{code}
ROOT_DIR_PATH
 |--- RM_APP_ROOT
 |     |- (#ApplicationId)        /* updated when a YARN application starts. */
 |     |- (#ApplicationAttemptId) /* updated when containers are allocated. */
 |--- RM_DT_SECRET_MANAGER_ROOT
       |- RMDelegationToken_(SequenceNumber)  /* updated when containers are assigned. */
       |- RMDTSequenceNumber_(SequenceNumber) /* updated when containers are assigned. A global variable. */
{code}

After (our proposal):
{code}
ROOT_DIR_PATH
 |--- RM_APP_ROOT
 |     |- (#ApplicationId1)
 |     |     |- ATTEMPT_IDS
 |     |     |     |- (#ApplicationAttemptIds)
 |     |     |- TOKENS
 |     |           |- RMDelegationToken_(#SequenceNumber)
 |     |- (#ApplicationId2)
 |           |- ATTEMPT_IDS
 |           |     |- (#ApplicationAttemptIds)
 |           |- TOKENS
 |                 |- RMDelegationToken_(#SequenceNumber)
 |--- RM_DT_SECRET_MANAGER_ROOT
       |- RMDTSequenceNumber_(SequenceNumber)
{code}

Rethink znode structure for RM HA - Key: YARN-1307 URL: https://issues.apache.org/jira/browse/YARN-1307 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Tsuyoshi OZAWA Rethinking the znode structure for RM HA has been proposed in some JIRAs (YARN-659, YARN-1222). The motivation of this JIRA is quoted from Bikas' comment in YARN-1222: {quote} We should move to creating a node hierarchy for apps such that all znodes for an app are stored under an app znode instead of the app root znode. This will help in removeApplication and also in scaling better on ZK. The earlier code was written this way to ensure create/delete happens under a root znode for fencing. But given that we have moved to multi-operations globally, this isn't required anymore. {quote} -- This message was sent by Atlassian JIRA (v6.1#6144)
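The proposed per-app hierarchy above can be sketched as simple path helpers. This is a hedged illustration only; the constant and method names are assumptions, not the actual RMStateStore code:

```java
// Path helpers mirroring the proposed "After" znode layout: attempts and
// per-app delegation tokens both live under their application's znode,
// which makes removeApplication a subtree delete.
class ZnodePaths {
    static final String ROOT = "/ROOT_DIR_PATH";

    static String appNode(String appId) {
        return ROOT + "/RM_APP_ROOT/" + appId;
    }

    static String attemptNode(String appId, String attemptId) {
        return appNode(appId) + "/ATTEMPT_IDS/" + attemptId;
    }

    static String tokenNode(String appId, long seqNum) {
        return appNode(appId) + "/TOKENS/RMDelegationToken_" + seqNum;
    }
}
```

With this layout, deleting everything for one application touches only the subtree rooted at appNode(appId), instead of scanning siblings under a flat app root.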
[jira] [Commented] (YARN-1288) Make Fair Scheduler ACLs more user friendly
[ https://issues.apache.org/jira/browse/YARN-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797353#comment-13797353 ] Hadoop QA commented on YARN-1288: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608807/YARN-1288-1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2193//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2193//console This message is automatically generated. Make Fair Scheduler ACLs more user friendly --- Key: YARN-1288 URL: https://issues.apache.org/jira/browse/YARN-1288 Project: Hadoop YARN Issue Type: Bug Components: scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1288-1.patch, YARN-1288.patch The Fair Scheduler currently defaults the root queue's acl to empty and all other queues' acl to *. Now that YARN-1258 enables configuring the root queue, we should reverse this. 
This will also bring the Fair Scheduler in line with the Capacity Scheduler. We should also not trim the acl strings, which makes it impossible to only specify groups in an acl. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1068) Add admin support for HA operations
[ https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797357#comment-13797357 ] Karthik Kambatla commented on YARN-1068: [~vinodkv], the inconsistencies with the rest of the YARN code come primarily from our attempt to re-use Common code. Our attempt has been to use the Common code as much as possible. Do you think we should keep it independent? Add admin support for HA operations --- Key: YARN-1068 URL: https://issues.apache.org/jira/browse/YARN-1068 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.1.0-beta Reporter: Karthik Kambatla Assignee: Karthik Kambatla Labels: ha Attachments: yarn-1068-10.patch, yarn-1068-11.patch, yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, yarn-1068-4.patch, yarn-1068-5.patch, yarn-1068-6.patch, yarn-1068-7.patch, yarn-1068-8.patch, yarn-1068-9.patch, yarn-1068-prelim.patch Support HA admin operations to facilitate transitioning the RM to Active and Standby states. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-659) RMStateStore's removeApplication APIs should just take an applicationId
[ https://issues.apache.org/jira/browse/YARN-659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797347#comment-13797347 ] Tsuyoshi OZAWA commented on YARN-659: - Oh, I posted to the wrong JIRA. Reposting this to the correct one (YARN-1307). I apologize for this. Let's have the discussion in YARN-1307. RMStateStore's removeApplication APIs should just take an applicationId --- Key: YARN-659 URL: https://issues.apache.org/jira/browse/YARN-659 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Vinod Kumar Vavilapalli Assignee: Tsuyoshi OZAWA There is no need to pass in the whole state for removal - just an ID should be enough when an app finishes. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1068) Add admin support for HA operations
[ https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797370#comment-13797370 ] Vinod Kumar Vavilapalli commented on YARN-1068: --- Trying to use common code is forcing us to start a new RPC server? I don't quite follow it; could you share more details? Add admin support for HA operations --- Key: YARN-1068 URL: https://issues.apache.org/jira/browse/YARN-1068 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.1.0-beta Reporter: Karthik Kambatla Assignee: Karthik Kambatla Labels: ha Attachments: yarn-1068-10.patch, yarn-1068-11.patch, yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, yarn-1068-4.patch, yarn-1068-5.patch, yarn-1068-6.patch, yarn-1068-7.patch, yarn-1068-8.patch, yarn-1068-9.patch, yarn-1068-prelim.patch Support HA admin operations to facilitate transitioning the RM to Active and Standby states. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (YARN-934) HistoryStorage writer interface for Application History Server
[ https://issues.apache.org/jira/browse/YARN-934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhijie Shen updated YARN-934: - Attachment: YARN-934.5.patch bq. Leave ApplicationHistoryStore as an interface, and make it extend Service. Fixed it in the newest patch. bq. Please fix the Memory based History store and the corresponding tests too. Will reopen YARN-956 to update the in-memory implementation against the new writer interface. HistoryStorage writer interface for Application History Server -- Key: YARN-934 URL: https://issues.apache.org/jira/browse/YARN-934 Project: Hadoop YARN Issue Type: Sub-task Reporter: Zhijie Shen Assignee: Zhijie Shen Fix For: YARN-321 Attachments: YARN-934.1.patch, YARN-934.2.patch, YARN-934.3.patch, YARN-934.4.patch, YARN-934.5.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Reopened] (YARN-956) [YARN-321] Add a testable in-memory HistoryStorage
[ https://issues.apache.org/jira/browse/YARN-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhijie Shen reopened YARN-956: -- Assignee: Zhijie Shen (was: Mayank Bansal) Need to update the in-memory implementation given the writer interface change. [YARN-321] Add a testable in-memory HistoryStorage --- Key: YARN-956 URL: https://issues.apache.org/jira/browse/YARN-956 Project: Hadoop YARN Issue Type: Sub-task Reporter: Vinod Kumar Vavilapalli Assignee: Zhijie Shen Fix For: YARN-321 Attachments: YARN-956-1.patch, YARN-956-2.patch, YARN-956-3.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (YARN-1315) TestQueueACLs should also test FairScheduler
[ https://issues.apache.org/jira/browse/YARN-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza updated YARN-1315: - Attachment: YARN-1315.patch TestQueueACLs should also test FairScheduler Key: YARN-1315 URL: https://issues.apache.org/jira/browse/YARN-1315 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager, scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1315.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (YARN-311) Dynamic node resource configuration: core scheduler changes
[ https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated YARN-311: Attachment: YARN-311-v8.patch Sync up patch with latest trunk. Dynamic node resource configuration: core scheduler changes --- Key: YARN-311 URL: https://issues.apache.org/jira/browse/YARN-311 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager, scheduler Reporter: Junping Du Assignee: Junping Du Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch, YARN-311-v6.1.patch, YARN-311-v6.2.patch, YARN-311-v6.patch, YARN-311-v7.patch, YARN-311-v8.patch As the first step, we go for resource change on the RM side and will expose admin APIs (admin protocol, CLI, REST and JMX API) later. This JIRA only contains changes in the scheduler. The flow to update a node's resource and its awareness in resource scheduling is: 1. A resource update comes through the admin API to the RM and takes effect on RMNodeImpl. 2. When the next NM heartbeat for updating status comes, the RMNode's resource change is picked up and the delta resource is added to the SchedulerNode's availableResource before actual scheduling happens. 3. The scheduler does resource allocation according to the new availableResource in SchedulerNode. For more design details, please refer to the proposal and discussions in the parent JIRA: YARN-291. -- This message was sent by Atlassian JIRA (v6.1#6144)
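Step 2 of the flow above is essentially a delta calculation applied to the scheduler node's headroom. A hedged sketch of that arithmetic, with plain ints standing in for YARN's Resource objects (names are illustrative, not the patch's code):

```java
// Simplified scheduler-node view: when the RMNode reports a new total
// resource, the delta is added to availableResource before the next
// scheduling cycle runs.
class SchedulerNodeSketch {
    int totalMB;
    int availableMB;

    SchedulerNodeSketch(int totalMB, int availableMB) {
        this.totalMB = totalMB;
        this.availableMB = availableMB;
    }

    void applyNewTotal(int newTotalMB) {
        int deltaMB = newTotalMB - totalMB; // delta from the admin resource update
        availableMB += deltaMB;             // headroom grows (or shrinks) by the delta
        totalMB = newTotalMB;
    }
}
```

Adding the delta, rather than resetting availableResource to the new total, preserves the accounting for containers already allocated on the node.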
[jira] [Updated] (YARN-891) Store completed application information in RM state store
[ https://issues.apache.org/jira/browse/YARN-891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-891: - Attachment: YARN-891.patch Uploaded a preliminary patch: - Add a new field finalState in ApplicationState and ApplicationAttemptState in RMStateStore. - All app transitions go through RMAppFinalStateSavingTransition, waiting for the final state to be stored before reaching the terminal state. - All attempt transitions go through AttemptFinalStateSavingTransition, waiting for the final attempt state to be stored before reaching the terminal state. - Corresponding PB changes. To do: - Fix TestRMRestart failures. - Separate state store clean-up thread. - Manual cluster test. Store completed application information in RM state store - Key: YARN-891 URL: https://issues.apache.org/jira/browse/YARN-891 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Bikas Saha Assignee: Jian He Attachments: YARN-891.patch Add information like exit status etc. for the completed attempt. -- This message was sent by Atlassian JIRA (v6.1#6144)
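The "final-state-saving transition" idea above can be sketched as a tiny state machine: the app parks in a saving state and only reaches its terminal state after the store confirms persistence. This is a hedged sketch with stub names, and it is synchronous where the real RM state store is asynchronous:

```java
class AppStateSketch {
    enum State { RUNNING, FINAL_SAVING, FINISHED }

    State state = State.RUNNING;
    boolean finalStateStored = false;

    void finish() {
        state = State.FINAL_SAVING; // park here until the store confirms
        storeFinalState();          // the real store call is asynchronous
    }

    private void storeFinalState() {
        finalStateStored = true;    // pretend the RMStateStore persisted it
        onFinalStateStored();       // store completion callback
    }

    private void onFinalStateStored() {
        state = State.FINISHED;     // terminal state only after persistence
    }
}
```

The point of the intermediate state is ordering: on restart, a persisted final state is guaranteed to exist for any app that reached its terminal state.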
[jira] [Commented] (YARN-1068) Add admin support for HA operations
[ https://issues.apache.org/jira/browse/YARN-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797407#comment-13797407 ] Karthik Kambatla commented on YARN-1068: Sorry, should have been clearer - was referring primarily to the PB interface use. Let me address each point separately. bq. Let's try to avoid adding one more server. We added AdminService separately from client-service only for QOS sake for admin operations. HAAdminService should be listening on the same port as all other operations. IIUC, the suggestion is to use the RPC server from AdminService. AdminService currently is an Active service and not an Always-On service, so doesn't start until the RM transitions to Active. Moving the AdminService to Always-On requires defining the semantics when the RM is Standby. bq. Following that, we should use only one rmadmin CLI for fail-over commands too. To do this, we need to have RMAdminCLI extend HAAdmin, and augment the run() method to call super.run() when applicable, and the usage needs to be augmented to include the HAAdmin usage. bq. RMHAProtocolService: We don't directly use PB interfaces in YARN. Let's not change that here - use YarnRPC to create servers. YARN expects the actual PB/PBImpl files to be at a particular location, and can't find the corresponding files when using HAServiceProtocol from common. Hence, had to use PB interfaces. bq. No tests specifically for the new code added here? The patch primarily adds command line support for HA transitions. Have tested this manually several times on a real cluster. 
Add admin support for HA operations --- Key: YARN-1068 URL: https://issues.apache.org/jira/browse/YARN-1068 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.1.0-beta Reporter: Karthik Kambatla Assignee: Karthik Kambatla Labels: ha Attachments: yarn-1068-10.patch, yarn-1068-11.patch, yarn-1068-1.patch, yarn-1068-2.patch, yarn-1068-3.patch, yarn-1068-4.patch, yarn-1068-5.patch, yarn-1068-6.patch, yarn-1068-7.patch, yarn-1068-8.patch, yarn-1068-9.patch, yarn-1068-prelim.patch Support HA admin operations to facilitate transitioning the RM to Active and Standby states. -- This message was sent by Atlassian JIRA (v6.1#6144)
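The CLI-merging idea discussed above (RMAdminCLI extending HAAdmin and falling back to super.run() when applicable) boils down to a delegation pattern. A simplified sketch with stub classes; these stand-ins are assumptions for illustration, not the real Hadoop HAAdmin or RMAdminCLI:

```java
// Stub for the HA admin layer: handles only HA transition commands.
class HAAdminStub {
    int run(String[] args) {
        if (args.length > 0 && ("-transitionToActive".equals(args[0])
                || "-transitionToStandby".equals(args[0]))) {
            return 0;  // HA command handled here
        }
        return -1;     // not an HA command
    }
}

// Stub for the RM admin CLI: handles RM commands, delegates the rest.
class RMAdminCLIStub extends HAAdminStub {
    @Override
    int run(String[] args) {
        if (args.length > 0 && "-refreshQueues".equals(args[0])) {
            return 0;           // RM admin command handled here
        }
        return super.run(args); // fall back to HA commands when applicable
    }
}
```

One rmadmin entry point then serves both command families, which also means the usage text has to be merged, as the comment notes.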
[jira] [Updated] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-1314: Attachment: YARN-1314.1.patch Let --shell_args accept multiple arguments separated by spaces. Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch Distributed shell cannot accept more than 1 parameter in the arguments part. All of these commands are treated as 1 parameter: /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distributed shell jar -shell_command echo -shell_args 'My name is Teddy' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distributed shell jar -shell_command echo -shell_args ''My name' 'is Teddy'' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distributed shell jar -shell_command echo -shell_args 'My name' 'is Teddy' -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797423#comment-13797423 ] Tassapol Athiapinya commented on YARN-1314: --- [~xgong] can you please give an example of command to run multiple arguments? Ideally each argument should allow spaces in between also. As an example, similar to shell command, we can do: cp my file 1.txt my file 2.txt This can be complex by allowing \ inside each argument in addition to having spaces. cp my\file 1.txt my\file 2.txt Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch Distributed shell cannot accept more than 1 parameters in argument parts. All of these commands are treated as 1 parameter: /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args 'My name is Teddy' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args ''My name' 'is Teddy'' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args 'My name' 'is Teddy' -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1172) Convert *SecretManagers in the RM to services
[ https://issues.apache.org/jira/browse/YARN-1172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797434#comment-13797434 ] Tsuyoshi OZAWA commented on YARN-1172: -- I'm updating a patch to override serviceInit/serviceStart/serviceStop and use them to work *SecretManagers as Service. Convert *SecretManagers in the RM to services - Key: YARN-1172 URL: https://issues.apache.org/jira/browse/YARN-1172 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.1.0-beta Reporter: Karthik Kambatla Assignee: Tsuyoshi OZAWA Attachments: YARN-1172.1.patch, YARN-1172.2.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
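The override approach described in the comment above follows the Hadoop service lifecycle: init, start, stop, with subclasses hooking serviceInit/serviceStart/serviceStop. A minimal sketch with a stub base class (not Hadoop's AbstractService) and an illustrative secret-manager subclass:

```java
import java.util.ArrayList;
import java.util.List;

// Stub lifecycle base: records state changes and calls the subclass hooks,
// mimicking the init/start/stop contract of a Hadoop Service.
abstract class ServiceStub {
    final List<String> lifecycle = new ArrayList<>();

    void init()  { serviceInit();  lifecycle.add("INITED");  }
    void start() { serviceStart(); lifecycle.add("STARTED"); }
    void stop()  { serviceStop();  lifecycle.add("STOPPED"); }

    protected void serviceInit()  {}
    protected void serviceStart() {}
    protected void serviceStop()  {}
}

// Illustrative secret manager run as a service: key rolling is tied to the
// start/stop hooks instead of ad-hoc start()/stop() methods.
class SecretManagerServiceSketch extends ServiceStub {
    boolean rolling;

    @Override protected void serviceStart() { rolling = true;  }
    @Override protected void serviceStop()  { rolling = false; }
}
```

Tying the secret managers to this contract lets the RM compose them with its other services and tear them down in a uniform order.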
[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797433#comment-13797433 ] Xuan Gong commented on YARN-1314: - [~tassapola] we can do --shell_command echo --shell_args a b c The (a b c) will be treated as three different arguments Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch Distributed shell cannot accept more than 1 parameters in argument parts. All of these commands are treated as 1 parameter: /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args 'My name is Teddy' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args ''My name' 'is Teddy'' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args 'My name' 'is Teddy' -- This message was sent by Atlassian JIRA (v6.1#6144)
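The question in this thread — splitting shell_args on spaces while still allowing arguments that contain spaces — is a quoting problem. A hedged sketch of a quote-aware splitter (illustrative only, not the distributed-shell implementation; it handles single quotes but not the escaped-character case raised above):

```java
import java.util.ArrayList;
import java.util.List;

class ShellArgsSplitter {
    // Split on spaces, but keep single-quoted segments together.
    static List<String> split(String s) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuote = false;
        for (char c : s.toCharArray()) {
            if (c == '\'') {
                inQuote = !inQuote;          // toggle quoting; quote chars are dropped
            } else if (c == ' ' && !inQuote) {
                if (cur.length() > 0) {      // flush the current argument
                    out.add(cur.toString());
                    cur.setLength(0);
                }
            } else {
                cur.append(c);
            }
        }
        if (cur.length() > 0) out.add(cur.toString());
        return out;
    }
}
```

With this, `a b c` yields three arguments while `'My name' 'is Teddy'` yields two, matching the behavior the reporter asks for.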
[jira] [Updated] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues
[ https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandy Ryza updated YARN-1241: - Attachment: YARN-1241-6.patch In Fair Scheduler maxRunningApps does not work for non-leaf queues -- Key: YARN-1241 URL: https://issues.apache.org/jira/browse/YARN-1241 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1241-1.patch, YARN-1241-2.patch, YARN-1241-3.patch, YARN-1241-4.patch, YARN-1241-5.patch, YARN-1241-6.patch, YARN-1241.patch Setting the maxRunningApps property on a parent queue should make it that the sum of apps in all subqueues can't exceed it -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797448#comment-13797448 ] Tassapol Athiapinya commented on YARN-1314: --- Can each argument be complex as I describe above? It is matching regular shell behavior. Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch Distributed shell cannot accept more than 1 parameters in argument parts. All of these commands are treated as 1 parameter: /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args 'My name is Teddy' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args ''My name' 'is Teddy'' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args 'My name' 'is Teddy' -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues
[ https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797442#comment-13797442 ] Sandy Ryza commented on YARN-1241: -- Uploaded a new patch that * Removes the false change lines * Make all the methods in MaxRunningAppsEnforcer take an FSSchedulerApp I think we should keep tracking non runnable apps in FSLeafQueue because they're used there for both demand calculation and displaying the number of running vs. pending apps in the web UI. In Fair Scheduler maxRunningApps does not work for non-leaf queues -- Key: YARN-1241 URL: https://issues.apache.org/jira/browse/YARN-1241 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1241-1.patch, YARN-1241-2.patch, YARN-1241-3.patch, YARN-1241-4.patch, YARN-1241-5.patch, YARN-1241-6.patch, YARN-1241.patch Setting the maxRunningApps property on a parent queue should make it that the sum of apps in all subqueues can't exceed it -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1123) [YARN-321] Adding ContainerReport and Protobuf implementation
[ https://issues.apache.org/jira/browse/YARN-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797450#comment-13797450 ] Mayank Bansal commented on YARN-1123: - [~zjshen] I think we should keep ContainerStatus and add the exit status as well for clarity. We may want to add more states in the future, and then we don't want to change this again. Thoughts? Thanks, Mayank [YARN-321] Adding ContainerReport and Protobuf implementation - Key: YARN-1123 URL: https://issues.apache.org/jira/browse/YARN-1123 Project: Hadoop YARN Issue Type: Sub-task Reporter: Zhijie Shen Assignee: Mayank Bansal Attachments: YARN-1123-1.patch, YARN-1123-2.patch Like YARN-978, we need some client-oriented class to expose the container history info. Neither Container nor RMContainer is the right one. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-947) Defining the history data classes for the implementation of the reading/writing interface
[ https://issues.apache.org/jira/browse/YARN-947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797452#comment-13797452 ] Mayank Bansal commented on YARN-947: Overall looks good; however, for point 5 please go through this comment: https://issues.apache.org/jira/browse/YARN-1123?focusedCommentId=13797450&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13797450 Defining the history data classes for the implementation of the reading/writing interface - Key: YARN-947 URL: https://issues.apache.org/jira/browse/YARN-947 Project: Hadoop YARN Issue Type: Sub-task Reporter: Zhijie Shen Assignee: Zhijie Shen Fix For: YARN-321 Attachments: YARN-947.1.patch, YARN-947.2.patch, YARN-947.3.patch We need to define the history data classes that have the exact fields to be stored. Therefore, none of the implementations need duplicate logic to extract the required information from RMApp, RMAppAttempt and RMContainer. We use protobuf to define these classes, such that they can be ser/des to/from bytes, which is easier for persistence. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-934) HistoryStorage writer interface for Application History Server
[ https://issues.apache.org/jira/browse/YARN-934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797456#comment-13797456 ] Hadoop QA commented on YARN-934: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608822/YARN-934.5.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2197//console This message is automatically generated. HistoryStorage writer interface for Application History Server -- Key: YARN-934 URL: https://issues.apache.org/jira/browse/YARN-934 Project: Hadoop YARN Issue Type: Sub-task Reporter: Zhijie Shen Assignee: Zhijie Shen Fix For: YARN-321 Attachments: YARN-934.1.patch, YARN-934.2.patch, YARN-934.3.patch, YARN-934.4.patch, YARN-934.5.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-1314: Attachment: YARN-1314.1.patch Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch, YARN-1314.1.patch Distributed shell cannot accept more than 1 parameters in argument parts. All of these commands are treated as 1 parameter: /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args 'My name is Teddy' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args ''My name' 'is Teddy'' /usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar distrubuted shell jar -shell_command echo -shell_args 'My name' 'is Teddy' -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1315) TestQueueACLs should also test FairScheduler
[ https://issues.apache.org/jira/browse/YARN-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797462#comment-13797462 ] Hadoop QA commented on YARN-1315: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608823/YARN-1315.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerQueueACLs {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2196//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2196//console This message is automatically generated. TestQueueACLs should also test FairScheduler Key: YARN-1315 URL: https://issues.apache.org/jira/browse/YARN-1315 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager, scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1315.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-311) Dynamic node resource configuration: core scheduler changes
[ https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797463#comment-13797463 ] Hadoop QA commented on YARN-311: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608826/YARN-311-v8.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-tools/hadoop-sls hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2195//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2195//console This message is automatically generated. 
Dynamic node resource configuration: core scheduler changes --- Key: YARN-311 URL: https://issues.apache.org/jira/browse/YARN-311 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager, scheduler Reporter: Junping Du Assignee: Junping Du Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch, YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch, YARN-311-v6.1.patch, YARN-311-v6.2.patch, YARN-311-v6.patch, YARN-311-v7.patch, YARN-311-v8.patch As the first step, we go for resource changes on the RM side and will expose the admin APIs (admin protocol, CLI, REST and JMX API) later. This JIRA contains only the scheduler changes. The flow for updating a node's resource, and the scheduler's awareness of it, is:
1. A resource update comes through the admin API to the RM and takes effect on RMNodeImpl.
2. When the next NM heartbeat for updating status comes, the RMNode's resource change is detected and the delta resource is added to the SchedulerNode's availableResource before actual scheduling happens.
3. The scheduler allocates resources according to the new availableResource in the SchedulerNode.
For more design details, please refer to the proposal and discussions in the parent JIRA: YARN-291. -- This message was sent by Atlassian JIRA (v6.1#6144)
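The three-step flow above can be sketched in miniature. This is an illustrative model only: the class names loosely echo YARN's RMNodeImpl and SchedulerNode, but the code is not taken from the patch, and a single memory figure stands in for the full Resource object.

```python
# Toy model of the dynamic-resource flow (steps 1-3 above), not YARN code.

class RMNode:
    """Stands in for RMNodeImpl: holds the node's total resource on the RM side."""
    def __init__(self, total_memory_mb):
        self.total_memory_mb = total_memory_mb

    def admin_update_resource(self, new_total_mb):
        # Step 1: the admin API updates the node's total resource.
        self.total_memory_mb = new_total_mb

class SchedulerNode:
    """Stands in for the scheduler's view of the node."""
    def __init__(self, total_memory_mb):
        self.total_memory_mb = total_memory_mb
        self.available_memory_mb = total_memory_mb

    def on_heartbeat(self, rm_node):
        # Step 2: on the next NM heartbeat, the delta between the RM-side
        # total and the scheduler-side total is added to availableResource.
        delta = rm_node.total_memory_mb - self.total_memory_mb
        self.total_memory_mb = rm_node.total_memory_mb
        self.available_memory_mb += delta

rm_node = RMNode(total_memory_mb=8192)
sched_node = SchedulerNode(total_memory_mb=8192)
sched_node.available_memory_mb -= 2048          # 2 GB already allocated to containers

rm_node.admin_update_resource(new_total_mb=12288)   # admin grows the node by 4 GB
sched_node.on_heartbeat(rm_node)

# Step 3: the scheduler now allocates against the new availableResource.
print(sched_node.available_memory_mb)  # 6144 + 4096 = 10240
```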
[jira] [Commented] (YARN-1307) Rethink znode structure for RM HA
[ https://issues.apache.org/jira/browse/YARN-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797470#comment-13797470 ] Vinod Kumar Vavilapalli commented on YARN-1307: --- Just read through the remaining JIRAs to catch up. The proposed changes look good to me. Can you assign this to yourself if you are working on a patch? Rethink znode structure for RM HA - Key: YARN-1307 URL: https://issues.apache.org/jira/browse/YARN-1307 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Tsuyoshi OZAWA Rethinking the znode structure for RM HA has been proposed in several JIRAs (YARN-659, YARN-1222). The motivation for this JIRA is quoted from Bikas' comment in YARN-1222: {quote} We should move to creating a node hierarchy for apps such that all znodes for an app are stored under an app znode instead of the app root znode. This will help in removeApplication and also in scaling better on ZK. The earlier code was written this way to ensure create/delete happens under a root znode for fencing. But given that we have moved to multi-operations globally, this isn't required anymore. {quote} -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1222) Make improvements in ZKRMStateStore for fencing
[ https://issues.apache.org/jira/browse/YARN-1222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797471#comment-13797471 ] Vinod Kumar Vavilapalli commented on YARN-1222: --- I think this should be blocked by YARN-1307, linking accordingly. Please mention why and change it if that isn't so. Make improvements in ZKRMStateStore for fencing --- Key: YARN-1222 URL: https://issues.apache.org/jira/browse/YARN-1222 Project: Hadoop YARN Issue Type: Sub-task Reporter: Bikas Saha Assignee: Karthik Kambatla Attachments: yarn-1222-1.patch Using multi-operations for every ZK interaction. In every operation, automatically creating/deleting a lock znode that is the child of the root znode. This is to achieve fencing by modifying the create/delete permissions on the root znode. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1222) Make improvements in ZKRMStateStore for fencing
[ https://issues.apache.org/jira/browse/YARN-1222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797473#comment-13797473 ] Bikas Saha commented on YARN-1222: -- This is related but shouldn't be blocked. This JIRA is to ensure that we always use ZK multi-operations when changing the ZK data and that each multi-operation includes a lock node create/delete operation. Make improvements in ZKRMStateStore for fencing --- Key: YARN-1222 URL: https://issues.apache.org/jira/browse/YARN-1222 Project: Hadoop YARN Issue Type: Sub-task Reporter: Bikas Saha Assignee: Karthik Kambatla Attachments: yarn-1222-1.patch Using multi-operations for every ZK interaction. In every operation, automatically creating/deleting a lock znode that is the child of the root znode. This is to achieve fencing by modifying the create/delete permissions on the root znode. -- This message was sent by Atlassian JIRA (v6.1#6144)
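The locking scheme described here can be sketched with a toy in-memory store. This is a hedged illustration of the idea, not ZKRMStateStore code: in the real store the batch would be a ZooKeeper multi() whose lock-znode create fails once the fenced RM's create permission on the root znode is revoked; here a boolean flag and hypothetical paths stand in for the ACL change.

```python
# Toy model of fencing via multi-operations: every write is an atomic batch
# of (create lock znode, real write, delete lock znode). An RM that has lost
# create permission on the root znode fails the whole batch, so it cannot
# sneak in a state write after being fenced.

class FencedStore:
    def __init__(self):
        self.znodes = {}
        self.can_create_under_root = True  # revoked when this RM is fenced

    def multi(self, path, data):
        # The three operations below model one atomic ZK multi().
        if not self.can_create_under_root:
            raise PermissionError("fenced: cannot create lock znode under root")
        self.znodes["/root/RM_FENCING_LOCK"] = b""   # op 1: create lock
        self.znodes[path] = data                      # op 2: the real write
        del self.znodes["/root/RM_FENCING_LOCK"]      # op 3: delete lock

store = FencedStore()
store.multi("/root/app_1/attempt_1", b"state")   # active RM: whole batch succeeds

store.can_create_under_root = False              # another RM fences us
try:
    store.multi("/root/app_1/attempt_2", b"state")
except PermissionError as e:
    print(e)   # the entire write is rejected, not just the lock operation
```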
[jira] [Commented] (YARN-1241) In Fair Scheduler maxRunningApps does not work for non-leaf queues
[ https://issues.apache.org/jira/browse/YARN-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797474#comment-13797474 ] Hadoop QA commented on YARN-1241: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608837/YARN-1241-6.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 5 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2198//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2198//console This message is automatically generated. 
In Fair Scheduler maxRunningApps does not work for non-leaf queues -- Key: YARN-1241 URL: https://issues.apache.org/jira/browse/YARN-1241 Project: Hadoop YARN Issue Type: Bug Affects Versions: 2.1.0-beta Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1241-1.patch, YARN-1241-2.patch, YARN-1241-3.patch, YARN-1241-4.patch, YARN-1241-5.patch, YARN-1241-6.patch, YARN-1241.patch Setting the maxRunningApps property on a parent queue should ensure that the total number of running apps across all of its subqueues cannot exceed it. -- This message was sent by Atlassian JIRA (v6.1#6144)
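For context, maxRunningApps is set per queue in the Fair Scheduler allocation file; the point of this JIRA is that a limit on a parent should constrain the sum across its children. A sketch with hypothetical queue names:

```xml
<allocations>
  <queue name="parent">
    <maxRunningApps>10</maxRunningApps>
    <queue name="childA"/>
    <queue name="childB"/>
  </queue>
</allocations>
```

With the fix, childA and childB together can run at most 10 apps, regardless of their own per-queue limits.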
[jira] [Commented] (YARN-1222) Make improvements in ZKRMStateStore for fencing
[ https://issues.apache.org/jira/browse/YARN-1222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797483#comment-13797483 ] Vinod Kumar Vavilapalli commented on YARN-1222: --- I meant blocking w.r.t the implementation and the patch conflicts more than the approach. If we are changing the node structure on ZK via YARN-1307, this patch will be affected, no? Make improvements in ZKRMStateStore for fencing --- Key: YARN-1222 URL: https://issues.apache.org/jira/browse/YARN-1222 Project: Hadoop YARN Issue Type: Sub-task Reporter: Bikas Saha Assignee: Karthik Kambatla Attachments: yarn-1222-1.patch Using multi-operations for every ZK interaction. In every operation, automatically creating/deleting a lock znode that is the child of the root znode. This is to achieve fencing by modifying the create/delete permissions on the root znode. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797546#comment-13797546 ] Hadoop QA commented on YARN-1314: - {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608843/YARN-1314.1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell: org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2199//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2199//console This message is automatically generated. Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch, YARN-1314.1.patch Distributed shell cannot accept more than 1 parameters in argument parts. 
All of these commands are treated as 1 parameter:
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name is Teddy'
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args ''My name' 'is Teddy''
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name' 'is Teddy'
-- This message was sent by Atlassian JIRA (v6.1#6144)
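For reference, ordinary shell word-splitting (modeled here with Python's shlex) shows how the quoted forms above should decompose into arguments; the bug is that the distributed shell client collapses them into a single parameter.

```python
# Shell-style word splitting: single quotes group words into one argument,
# separate quoted groups become separate arguments.
import shlex

print(shlex.split("'My name is Teddy'"))    # ['My name is Teddy']   -> one arg
print(shlex.split("'My name' 'is Teddy'"))  # ['My name', 'is Teddy'] -> two args
```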
[jira] [Commented] (YARN-1307) Rethink znode structure for RM HA
[ https://issues.apache.org/jira/browse/YARN-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797548#comment-13797548 ] Jian He commented on YARN-1307: --- The RMDelegationToken is not application-specific (a user can also explicitly call getDelegationToken), so it should not be stored along with the app info. One question: does ZK support 'directory' removal, meaning if the parent node gets deleted, will its child nodes also get deleted along with it? Rethink znode structure for RM HA - Key: YARN-1307 URL: https://issues.apache.org/jira/browse/YARN-1307 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Tsuyoshi OZAWA Rethinking the znode structure for RM HA has been proposed in several JIRAs (YARN-659, YARN-1222). The motivation for this JIRA is quoted from Bikas' comment in YARN-1222: {quote} We should move to creating a node hierarchy for apps such that all znodes for an app are stored under an app znode instead of the app root znode. This will help in removeApplication and also in scaling better on ZK. The earlier code was written this way to ensure create/delete happens under a root znode for fencing. But given that we have moved to multi-operations globally, this isn't required anymore. {quote} -- This message was sent by Atlassian JIRA (v6.1#6144)
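On the 'directory' removal question: ZooKeeper's plain delete() fails with NotEmptyException when a znode still has children, so removing an app's subtree means deleting the children before the parent. A sketch over an in-memory set of paths (the store root and app/attempt names are hypothetical):

```python
# Model znodes as a set of paths; delete an app's whole subtree by removing
# descendants first, then the app znode itself (as a real recursive-delete
# helper over ZooKeeper would have to).

def delete_recursive(znodes, path):
    # Remove every descendant of `path`, then `path` itself.
    for child in [p for p in znodes if p.startswith(path + "/")]:
        znodes.discard(child)
    znodes.discard(path)

znodes = {
    "/rmstore/app_1",
    "/rmstore/app_1/attempt_1",
    "/rmstore/app_1/attempt_2",
    "/rmstore/app_2",
}
delete_recursive(znodes, "/rmstore/app_1")
print(sorted(znodes))  # ['/rmstore/app_2']
```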
[jira] [Created] (YARN-1316) Add continuous scheduling and allow-undeclared-pools to Fair Scheduler documentation
Sandy Ryza created YARN-1316: Summary: Add continuous scheduling and allow-undeclared-pools to Fair Scheduler documentation Key: YARN-1316 URL: https://issues.apache.org/jira/browse/YARN-1316 Project: Hadoop YARN Issue Type: Improvement Components: documentation Reporter: Sandy Ryza Assignee: Sandy Ryza -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1185) FileSystemRMStateStore can leave partial files that prevent subsequent recovery
[ https://issues.apache.org/jira/browse/YARN-1185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797558#comment-13797558 ] Jian He commented on YARN-1185: --- It would also be better for the test case to assert at the end that the corrupted application/attempt is not loaded back into RMState and does not exist in the FileSystem. FileSystemRMStateStore can leave partial files that prevent subsequent recovery --- Key: YARN-1185 URL: https://issues.apache.org/jira/browse/YARN-1185 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Affects Versions: 2.1.0-beta Reporter: Jason Lowe Assignee: Omkar Vinit Joshi Attachments: YARN-1185.1.patch FileSystemRMStateStore writes directly to the destination file when storing state. However, if the RM were to crash in the middle of the write, the recovery method could encounter a partially-written file and either outright crash during recovery or silently load incomplete state. To avoid this, the data should be written to a temporary file and renamed to the destination file afterwards. -- This message was sent by Atlassian JIRA (v6.1#6144)
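The temp-file-plus-rename fix described in this issue, in miniature. This is a generic local-filesystem sketch, not FileSystemRMStateStore code (the store writes through Hadoop's FileSystem API), but the pattern is the same: a crash mid-write leaves either the old file or the complete new file, never a partial one.

```python
# Write state to a temporary file in the same directory, fsync it, then
# atomically rename it over the destination. os.replace is atomic on POSIX.
import os
import tempfile

def atomic_write(path, data):
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # data is on disk before the rename
        os.replace(tmp, path)          # atomic: readers see old or new, never partial
    finally:
        if os.path.exists(tmp):        # clean up only if the rename didn't happen
            os.remove(tmp)

atomic_write("app_state.bin", b"serialized application state")
print(open("app_state.bin", "rb").read())  # b'serialized application state'
```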
[jira] [Commented] (YARN-1288) Make Fair Scheduler ACLs more user friendly
[ https://issues.apache.org/jira/browse/YARN-1288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797568#comment-13797568 ] Alejandro Abdelnur commented on YARN-1288: -- patch LGTM. Before committing it ... * Would this be an incompatible change? If so, can the configuration be set to have the previous behavior? If so, that should be the default setting. * Documentation is missing. Make Fair Scheduler ACLs more user friendly --- Key: YARN-1288 URL: https://issues.apache.org/jira/browse/YARN-1288 Project: Hadoop YARN Issue Type: Bug Components: scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1288-1.patch, YARN-1288.patch The Fair Scheduler currently defaults the root queue's acl to empty and all other queues' acl to *. Now that YARN-1258 enables configuring the root queue, we should reverse this. This will also bring the Fair Scheduler in line with the Capacity Scheduler. We should also not trim the acl strings, which makes it impossible to only specify groups in an acl. -- This message was sent by Atlassian JIRA (v6.1#6144)
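For reference, queue ACLs live in the Fair Scheduler allocation file as "users groups" strings (a lone space means nobody may submit), which is also why trimming the strings breaks group-only ACLs. A sketch with hypothetical queue, user, and group names:

```xml
<allocations>
  <queue name="root">
    <!-- " " = no users, no groups: locked down by default after this change -->
    <aclSubmitApps> </aclSubmitApps>
    <queue name="analytics">
      <!-- user "sandy" plus members of group "analytics-group" -->
      <aclSubmitApps>sandy analytics-group</aclSubmitApps>
    </queue>
  </queue>
</allocations>
```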
[jira] [Commented] (YARN-1315) TestQueueACLs should also test FairScheduler
[ https://issues.apache.org/jira/browse/YARN-1315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797569#comment-13797569 ] Alejandro Abdelnur commented on YARN-1315: -- Sandy told me offline that this passes with YARN-1288 applied. +1 after a run of Jenkins with YARN-1288 applied. TestQueueACLs should also test FairScheduler Key: YARN-1315 URL: https://issues.apache.org/jira/browse/YARN-1315 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager, scheduler Affects Versions: 2.2.0 Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-1315.patch -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Updated] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuan Gong updated YARN-1314: Attachment: YARN-1314.2.patch Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch, YARN-1314.1.patch, YARN-1314.2.patch Distributed shell cannot accept more than 1 parameter in the arguments part. All of these commands are treated as 1 parameter:
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name is Teddy'
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args ''My name' 'is Teddy''
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name' 'is Teddy'
-- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797625#comment-13797625 ] Xuan Gong commented on YARN-1314: - bq. Can each argument be complex as I describe above? It is matching regular shell behavior. I think so. Try this command: {code} --shell_command echo --shell_args YARN HADOOP '\\$MYARG\\\' -shell_env MYARG=myval {code} We get the expected output. As long as we give valid arguments, it will give the expected output. Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch, YARN-1314.1.patch, YARN-1314.2.patch Distributed shell cannot accept more than 1 parameter in the arguments part. All of these commands are treated as 1 parameter:
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name is Teddy'
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args ''My name' 'is Teddy''
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name' 'is Teddy'
-- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13797631#comment-13797631 ] Tassapol Athiapinya commented on YARN-1314: --- Looking good. Thanks for the patch! Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch, YARN-1314.1.patch, YARN-1314.2.patch Distributed shell cannot accept more than 1 parameter in the arguments part. All of these commands are treated as 1 parameter:
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name is Teddy'
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args ''My name' 'is Teddy''
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name' 'is Teddy'
-- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Commented] (YARN-1314) Cannot pass more than 1 argument to shell command
[ https://issues.apache.org/jira/browse/YARN-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13797635#comment-13797635 ] Hadoop QA commented on YARN-1314: - {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12608881/YARN-1314.2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2200//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2200//console This message is automatically generated. Cannot pass more than 1 argument to shell command - Key: YARN-1314 URL: https://issues.apache.org/jira/browse/YARN-1314 Project: Hadoop YARN Issue Type: Bug Components: applications/distributed-shell Reporter: Tassapol Athiapinya Assignee: Xuan Gong Fix For: 2.2.1 Attachments: YARN-1314.1.patch, YARN-1314.1.patch, YARN-1314.2.patch Distributed shell cannot accept more than 1 parameters in argument parts. 
All of these commands are treated as 1 parameter:
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name is Teddy'
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args ''My name' 'is Teddy''
/usr/bin/yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar <distributed shell jar> -shell_command echo -shell_args 'My name' 'is Teddy'
-- This message was sent by Atlassian JIRA (v6.1#6144)