[jira] [Commented] (YARN-90) NodeManager should identify failed disks becoming good back again
[ https://issues.apache.org/jira/browse/YARN-90?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777082#comment-13777082 ]

Ravi Prakash commented on YARN-90:
----------------------------------

Hi nijel! Welcome to the community and thanks for your contribution. A few comments:
1. Nit: Some lines are over 80 characters long.
2. numFailures is no longer incremented when a directory fails, so getNumFailures() would return the wrong result.

Could you please also tell us how you tested the patch? There seem to be a lot of unit tests which use LocalDirsHandlerService. Did you run them all and ensure that they all still pass? Thanks again!

> NodeManager should identify failed disks becoming good back again
> -----------------------------------------------------------------
>
>                 Key: YARN-90
>                 URL: https://issues.apache.org/jira/browse/YARN-90
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: nodemanager
>            Reporter: Ravi Gummadi
>         Attachments: YARN-90.patch
>
>
> MAPREDUCE-3121 makes NodeManager identify disk failures. But once a disk goes
> down, it is marked as failed forever. To reuse that disk (after it becomes
> good), NodeManager needs a restart. This JIRA is to improve NodeManager to
> reuse good disks (which could have been bad some time back).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
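The bookkeeping the review comment asks for can be sketched roughly as below. This is a minimal illustration, not the actual LocalDirsHandlerService code; the class and method names (DirsTracker, markFailed, markGoodAgain) are hypothetical. The point is that the cumulative failure counter must still be incremented on every good-to-failed transition, even though a failed dir can now move back to the good list.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: dirs can transition good -> failed -> good again,
// but numFailures stays a cumulative count of failure events.
public class DirsTracker {
  private final Set<String> goodDirs = new HashSet<>();
  private final Set<String> failedDirs = new HashSet<>();
  private int numFailures = 0;

  public void addDir(String dir) {
    goodDirs.add(dir);
  }

  // Called when a disk check fails for a dir.
  public void markFailed(String dir) {
    if (goodDirs.remove(dir)) {
      failedDirs.add(dir);
      numFailures++; // must still be incremented on every failure
    }
  }

  // Called when a previously failed disk passes the check again.
  public void markGoodAgain(String dir) {
    if (failedDirs.remove(dir)) {
      goodDirs.add(dir);
      // numFailures is cumulative and is NOT decremented here
    }
  }

  public int getNumFailures() {
    return numFailures;
  }
}
```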
[jira] [Commented] (YARN-1215) Yarn URL should include userinfo
[ https://issues.apache.org/jira/browse/YARN-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777124#comment-13777124 ]

Hadoop QA commented on YARN-1215:
---------------------------------

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12604945/YARN-1215-trunk.2.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2011//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2011//console

This message is automatically generated.

> Yarn URL should include userinfo
> --------------------------------
>
>                 Key: YARN-1215
>                 URL: https://issues.apache.org/jira/browse/YARN-1215
>             Project: Hadoop YARN
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Chuan Liu
>            Assignee: Chuan Liu
>         Attachments: YARN-1215-trunk.2.patch, YARN-1215-trunk.patch
>
>
> In the {{org.apache.hadoop.yarn.api.records.URL}} class, we don't have a
> userinfo as part of the URL.
> When converting a {{java.net.URI}} object into the YARN URL object in the
> {{ConverterUtils.getYarnUrlFromURI()}} method, we set the uri host as the url
> host. If the uri has a userinfo part, the userinfo is discarded. This leads to
> information loss if the original uri has userinfo; e.g.
> foo://username:passw...@example.com will be converted to foo://example.com and
> the username/password information is lost during the conversion.
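The loss described above can be demonstrated with plain {{java.net.URI}}, which already exposes the userinfo component. The sketch below is illustrative only: the helper names (lossyConvert, rebuildWithUserInfo) are hypothetical and not the actual ConverterUtils code, but they show the difference between rebuilding from the host alone and carrying the userinfo through.

```java
import java.net.URI;

// Illustration of the userinfo loss, plus a hedged sketch of a
// conversion that preserves it. Helper names are hypothetical.
public class UriUserInfoDemo {
  // Rebuild a URI string the way a lossy converter would: scheme + host only.
  public static String lossyConvert(URI uri) {
    return uri.getScheme() + "://" + uri.getHost();
  }

  // Sketch of a conversion that carries the userinfo through.
  public static String rebuildWithUserInfo(URI uri) {
    String authority = uri.getUserInfo() != null
        ? uri.getUserInfo() + "@" + uri.getHost()
        : uri.getHost();
    return uri.getScheme() + "://" + authority;
  }
}
```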
[jira] [Assigned] (YARN-1028) Add FailoverProxyProvider like capability to RMProxy
[ https://issues.apache.org/jira/browse/YARN-1028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Devaraj K reassigned YARN-1028:
-------------------------------

    Assignee: Karthik Kambatla  (was: Devaraj K)

> Add FailoverProxyProvider like capability to RMProxy
> ----------------------------------------------------
>
>                 Key: YARN-1028
>                 URL: https://issues.apache.org/jira/browse/YARN-1028
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Bikas Saha
>            Assignee: Karthik Kambatla
>
> The RMProxy layer currently abstracts RM discovery and implements it by looking
> up service information from configuration. Motivated by HDFS and using existing
> classes from Common, we can add failover proxy providers that may provide RM
> discovery in extensible ways.
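A pluggable provider in the spirit of HDFS's FailoverProxyProvider could look roughly like the sketch below. This is not the actual RMProxy code; the class name RoundRobinRMProvider and its methods are hypothetical, and a real provider would hand out RPC proxies rather than bare addresses. It only illustrates the discover-then-failover contract the description alludes to.

```java
import java.net.InetSocketAddress;
import java.util.List;

// Hedged sketch of a pluggable RM discovery/failover provider, modeled
// loosely on HDFS's FailoverProxyProvider idea. Illustrative names only.
public class RoundRobinRMProvider {
  private final List<InetSocketAddress> rmAddresses;
  private int current = 0;

  public RoundRobinRMProvider(List<InetSocketAddress> rmAddresses) {
    this.rmAddresses = rmAddresses;
  }

  // Address the client should currently talk to.
  public InetSocketAddress getCurrent() {
    return rmAddresses.get(current);
  }

  // Called when the current RM is unreachable; advance to the next one.
  public void performFailover() {
    current = (current + 1) % rmAddresses.size();
  }
}
```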
[jira] [Updated] (YARN-311) Dynamic node resource configuration: core scheduler changes
[ https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du updated YARN-311:
----------------------------

    Attachment: YARN-311-v7.patch

Synced the patch up with trunk in the v7 patch.

> Dynamic node resource configuration: core scheduler changes
> -----------------------------------------------------------
>
>                 Key: YARN-311
>                 URL: https://issues.apache.org/jira/browse/YARN-311
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: resourcemanager, scheduler
>            Reporter: Junping Du
>            Assignee: Junping Du
>         Attachments: YARN-311-v1.patch, YARN-311-v2.patch, YARN-311-v3.patch,
> YARN-311-v4.patch, YARN-311-v4.patch, YARN-311-v5.patch, YARN-311-v6.1.patch,
> YARN-311-v6.2.patch, YARN-311-v6.patch, YARN-311-v7.patch
>
>
> As a first step, we go for resource change on the RM side and expose admin APIs
> (admin protocol, CLI, REST and JMX API) later. This jira contains only the
> scheduler changes.
> The flow to update a node's resource and make resource scheduling aware of it is:
> 1. The resource update comes through the admin API to the RM and takes effect on
> RMNodeImpl.
> 2. When the next NM heartbeat for updating status comes, the RMNode's resource
> change is recognized and the delta resource is added to the SchedulerNode's
> availableResource before actual scheduling happens.
> 3. The scheduler does resource allocation according to the new availableResource
> in the SchedulerNode.
> For more design details, please refer to the proposal and discussions in the
> parent JIRA: YARN-291.
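Step 2 of the flow above (folding the delta into the available resource on the next heartbeat) can be sketched as follows. This is a simplified, memory-only illustration with hypothetical names (SchedulerNodeSketch, applyResourceUpdate); it is not the actual RMNodeImpl/SchedulerNode code.

```java
// Hedged sketch of step 2: when the configured total changes, the delta
// (new total minus old total) is applied to the available resource
// before the next scheduling cycle. Memory-only; names are illustrative.
public class SchedulerNodeSketch {
  private int totalMemMB;
  private int availableMemMB;

  public SchedulerNodeSketch(int totalMemMB, int usedMemMB) {
    this.totalMemMB = totalMemMB;
    this.availableMemMB = totalMemMB - usedMemMB;
  }

  // Admin API updated the node's total; fold the delta into what is
  // currently available. Running containers are untouched.
  public void applyResourceUpdate(int newTotalMemMB) {
    int delta = newTotalMemMB - totalMemMB;
    totalMemMB = newTotalMemMB;
    availableMemMB += delta;
  }

  public int getAvailableMemMB() {
    return availableMemMB;
  }
}
```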
[jira] [Created] (YARN-1235) Regulate the case of applicationType
Zhijie Shen created YARN-1235:
---------------------------------

             Summary: Regulate the case of applicationType
                 Key: YARN-1235
                 URL: https://issues.apache.org/jira/browse/YARN-1235
             Project: Hadoop YARN
          Issue Type: Bug
            Reporter: Zhijie Shen
            Assignee: Zhijie Shen

In YARN-1001, when filtering applications, we ignore the case of the applicationType. However, RMClientService#getApplications doesn't. Moreover, it is not documented whether ApplicationClientProtocol ignores the case of applicationType. IMHO, we need to:
1. Modify RMClientService#getApplications to ignore the case of applicationType when filtering applications.
2. Add javadoc to ApplicationClientProtocol#submitApplication and getApplications saying that applicationType is case insensitive.
3. Probably, at submitApplication time, "normalize" the applicationType to lower case.
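The two proposed behaviors can be sketched in a few lines. These helper names (AppTypeSketch, typeMatches, normalizeType) are illustrative only, not the actual RMClientService code; the point is case-insensitive comparison when filtering and lower-casing at submission time.

```java
import java.util.Locale;

// Illustrative sketch of the two proposals above: case-insensitive
// filtering, and normalizing applicationType at submission time.
public class AppTypeSketch {
  // Filter comparison that ignores case, as YARN-1001 does.
  public static boolean typeMatches(String requested, String actual) {
    return requested.equalsIgnoreCase(actual);
  }

  // Normalize to lower case at submitApplication time, so all stored
  // types share one canonical form. A fixed locale avoids surprises
  // with locale-sensitive case mappings.
  public static String normalizeType(String applicationType) {
    return applicationType.toLowerCase(Locale.ENGLISH);
  }
}
```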
[jira] [Commented] (YARN-311) Dynamic node resource configuration: core scheduler changes
[ https://issues.apache.org/jira/browse/YARN-311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777171#comment-13777171 ]

Hadoop QA commented on YARN-311:
--------------------------------

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12604962/YARN-311-v7.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/2012//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2012//console

This message is automatically generated.