[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues
[ https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004120#comment-16004120 ] Hadoop QA commented on YARN-6475: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in trunk has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 0 new + 70 unchanged - 18 fixed = 70 total (was 88) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 37s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6475 | | GITHUB PR | https://github.com/apache/hadoop/pull/218 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e994e58b9a31 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 06ffeb8 | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15889/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15889/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15889/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically
[jira] [Commented] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004101#comment-16004101 ] Hudson commented on YARN-6435: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11710 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11710/]) YARN-6435. [ATSv2] Can't retrieve more than 1000 versions of metrics in (haibochen: rev 461ee44d287b1fcf0bf15d662aebd3e6f2b83a72) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/application/ApplicationTable.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/entity/EntityTable.java > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Vrushali C >Priority: Critical > Fix For: YARN-5355, YARN-5355-branch-2, 3.0.0-alpha3 > > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch, > YARN-6435.YARN-5355.0002.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. 
The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
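The MIN behavior described in YARN-6435 above can be sketched as a tiny model. This is an illustration only, with a hypothetical helper name, not code from the patch: HBase caps the number of cell versions a read returns at the column family's MAX_VERSIONS, so the effective limit is the smaller of the column-family setting and the requested metricslimit.

```java
public class EffectiveVersions {
    // Hypothetical helper (not from the patch): models how HBase caps
    // returned versions at min(cf MAX_VERSIONS, requested versions).
    // Per the documentation quoted above, a metricslimit < 1 means
    // "latest single value only", hence the floor at 1.
    static int effectiveVersions(int cfMaxVersions, int metricslimit) {
        int requested = Math.max(metricslimit, 1);
        return Math.min(cfMaxVersions, requested);
    }

    public static void main(String[] args) {
        // The scenario from the report: metricslimit=1500 against the
        // hardcoded DEFAULT_METRICS_MAX_VERSIONS of 1000 caps at 1000.
        System.out.println(effectiveVersions(1000, 1500));
    }
}
```

This is why raising metricslimit alone cannot help: the column-family setting fixed at table-creation time wins whenever it is smaller.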
[jira] [Commented] (YARN-6561) Update exception message during timeline collector aux service initialization
[ https://issues.apache.org/jira/browse/YARN-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004102#comment-16004102 ] Hudson commented on YARN-6561: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11710 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11710/]) YARN-6561. Update exception message during timeline collector aux (haibochen: rev ab2bb93a2ab1651b73ec9ba2d1deec4deafdecaf) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/PerNodeTimelineCollectorsAuxService.java > Update exception message during timeline collector aux service initialization > - > > Key: YARN-6561 > URL: https://issues.apache.org/jira/browse/YARN-6561 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Vrushali C >Assignee: Vrushali C >Priority: Minor > Fix For: YARN-5355, 3.0.0-alpha3 > > Attachments: YARN-6561.001.patch > > > If the NM is started with timeline service v2 turned off AND aux services > setting still containing "timeline_collector", NM will fail to start up since > the PerNodeTimelineCollectorsAuxService#serviceInit is invoked and it throws > an exception. The exception message is a bit misleading and does not indicate > where the actual misconfiguration is. > We should update the exception message so that the right error is conveyed > and helps the cluster admin/ops to correct the relevant yarn config settings.
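As a sketch, the misconfiguration described in YARN-6561 corresponds roughly to a yarn-site.xml where the aux-services list still registers timeline_collector while ATSv2 is turned off (property names are standard YARN configuration keys; the values here are illustrative, not taken from the issue):

```xml
<!-- Illustrative yarn-site.xml fragment: the aux-services list still
     names timeline_collector ... -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,timeline_collector</value>
</property>
<!-- ...while timeline service v2 is disabled, so
     PerNodeTimelineCollectorsAuxService#serviceInit throws on NM startup. -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>false</value>
</property>
```

Removing timeline_collector from the aux-services list, or enabling ATSv2, resolves the mismatch; the patch only improves the exception message that points the admin at it.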
[jira] [Assigned] (YARN-5841) Report only local collectors on node upon resync with RM after RM fails over
[ https://issues.apache.org/jira/browse/YARN-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen reassigned YARN-5841: Assignee: Haibo Chen > Report only local collectors on node upon resync with RM after RM fails over > > > Key: YARN-5841 > URL: https://issues.apache.org/jira/browse/YARN-5841 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Haibo Chen > > As per discussion on YARN-3359, we can potentially optimize reporting of > collectors to RM after RM fails over. > Currently NM would report all the collectors known to itself in HB after > resync with RM. This would mean many NMs' may pretty much report similar set > of collector infos in first NM HB on reconnection. > This JIRA is to explore how to optimize this flow and if possible, fix it.
[jira] [Commented] (YARN-6140) start time key in NM leveldb store should be removed when container is removed
[ https://issues.apache.org/jira/browse/YARN-6140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004087#comment-16004087 ] Haibo Chen commented on YARN-6140: -- [~ajithshetty], any update on this? > start time key in NM leveldb store should be removed when container is removed > -- > > Key: YARN-6140 > URL: https://issues.apache.org/jira/browse/YARN-6140 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager >Affects Versions: YARN-5355 >Reporter: Sangjin Lee >Assignee: Ajith S > > It appears that the start time key is not removed when the container is > removed. The key was introduced in YARN-5792. > I found this while backporting the YARN-5355-branch-2 branch to our internal > branch loosely based on 2.6.0. The {{TestNMLeveldbStateStoreService}} test > was failing because of this. > I'm not sure why we didn't see this earlier.
[jira] [Commented] (YARN-6559) Findbugs warning in YARN-5355 branch
[ https://issues.apache.org/jira/browse/YARN-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004077#comment-16004077 ] Haibo Chen commented on YARN-6559: -- YARN-6518 has now been cherry-picked into YARN-5355 as well. Will close this jira as a duplicate. > Findbugs warning in YARN-5355 branch > > > Key: YARN-6559 > URL: https://issues.apache.org/jira/browse/YARN-6559 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C >Priority: Minor > Attachments: FindBugs Report.html, YARN-6559-YARN-5355.001.patch > >
[jira] [Commented] (YARN-6518) Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice
[ https://issues.apache.org/jira/browse/YARN-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004075#comment-16004075 ] Haibo Chen commented on YARN-6518: -- Thanks [~Naganarasimha] ! > Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice > > > Key: YARN-6518 > URL: https://issues.apache.org/jira/browse/YARN-6518 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: findbugs > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6518.001.patch > > > There is 1 findbugs warning in hadoop-yarn-server-timelineservice since > switched to spotbugs > # Possible null pointer dereference in > org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl.getEntities(File, > String, TimelineEntityFilters, TimelineDataToRetrieve) due to return value > of called method > See more in > [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-warnings.html]
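The warning quoted in YARN-6518 is the classic {{File.listFiles()}} null-return hazard. A minimal guard looks like the following sketch (a hypothetical helper for illustration, not the actual FileSystemTimelineReaderImpl code):

```java
import java.io.File;

public class SafeList {
    // File.listFiles() returns null (not an empty array) when the path
    // is not a directory or an I/O error occurs; iterating the result
    // without a null check is exactly what findbugs/spotbugs flags.
    static File[] listOrEmpty(File dir) {
        File[] entries = dir.listFiles();
        return entries != null ? entries : new File[0];
    }

    public static void main(String[] args) {
        // For a nonexistent path, listFiles() yields null; the guard
        // turns that into a safe empty array.
        System.out.println(listOrEmpty(new File("no-such-dir-xyz")).length);
    }
}
```

Since Java 7, `java.nio.file.Files.list`/`newDirectoryStream` are often preferred because they throw a descriptive IOException instead of returning null.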
[jira] [Commented] (YARN-6518) Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice
[ https://issues.apache.org/jira/browse/YARN-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004067#comment-16004067 ] Naganarasimha G R commented on YARN-6518: - Thanks [~haibochen] for pointing it out. I had actually cherry-picked it to {{YARN-5355-branch-2}}; now I have cherry-picked it to YARN-5355 too. > Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice > > > Key: YARN-6518 > URL: https://issues.apache.org/jira/browse/YARN-6518 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: findbugs > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6518.001.patch > > > There is 1 findbugs warning in hadoop-yarn-server-timelineservice since > switched to spotbugs > # Possible null pointer dereference in > org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl.getEntities(File, > String, TimelineEntityFilters, TimelineDataToRetrieve) due to return value > of called method > See more in > [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-warnings.html]
[jira] [Updated] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-6435: - Fix Version/s: YARN-5355-branch-2 > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Vrushali C >Priority: Critical > Fix For: YARN-5355, YARN-5355-branch-2, 3.0.0-alpha3 > > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch, > YARN-6435.YARN-5355.0002.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. > {code}
[jira] [Commented] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004061#comment-16004061 ] Haibo Chen commented on YARN-6435: -- Will do > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Vrushali C >Priority: Critical > Fix For: YARN-5355, 3.0.0-alpha3 > > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch, > YARN-6435.YARN-5355.0002.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. > {code}
[jira] [Commented] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004059#comment-16004059 ] Rohith Sharma K S commented on YARN-6435: - I think all the fixes for YARN-5355 should also be backported to YARN-5355-branch-2. > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Vrushali C >Priority: Critical > Fix For: YARN-5355, 3.0.0-alpha3 > > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch, > YARN-6435.YARN-5355.0002.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. > {code}
[jira] [Resolved] (YARN-6564) RM service is shutting down when ATS v2 is enabled
[ https://issues.apache.org/jira/browse/YARN-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S resolved YARN-6564. - Resolution: Not A Problem > RM service is shutting down when ATS v2 is enabled > -- > > Key: YARN-6564 > URL: https://issues.apache.org/jira/browse/YARN-6564 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Akhil PB >Assignee: Rohith Sharma K S >Priority: Critical > > RM shutting down with following error > {code} > 2017-05-05 14:41:06,056 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: > Error in dispatcher thread > java.lang.IllegalAccessError: tried to access method > com.google.common.base.Stopwatch.<init>()V from class > org.apache.hadoop.hbase.zookeeper.MetaTableLocator > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604) > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588) > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561) > {code}
[jira] [Commented] (YARN-6564) RM service is shutting down when ATS v2 is enabled
[ https://issues.apache.org/jira/browse/YARN-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004051#comment-16004051 ] Rohith Sharma K S commented on YARN-6564: - Yep, will close it. > RM service is shutting down when ATS v2 is enabled > -- > > Key: YARN-6564 > URL: https://issues.apache.org/jira/browse/YARN-6564 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Akhil PB >Assignee: Rohith Sharma K S >Priority: Critical > > RM shutting down with following error > {code} > 2017-05-05 14:41:06,056 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: > Error in dispatcher thread > java.lang.IllegalAccessError: tried to access method > com.google.common.base.Stopwatch.<init>()V from class > org.apache.hadoop.hbase.zookeeper.MetaTableLocator > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604) > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588) > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561) > {code}
[jira] [Commented] (YARN-6533) Race condition in writing service record to registry in yarn native services
[ https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004045#comment-16004045 ] Jian He commented on YARN-6533: --- In SliderAppMaster, the code below also doesn't replace '_' with '-'. Is it required to replace '_' with '-' for the YARN_ID? What's the expectation? {code} serviceRecord.set(YarnRegistryAttributes.YARN_ID, appId.toString()); {code} > Race condition in writing service record to registry in yarn native services > > > Key: YARN-6533 > URL: https://issues.apache.org/jira/browse/YARN-6533 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-6533-yarn-native-services.001.patch, > YARN-6533-yarn-native-services.002.patch > > > The ServiceRecord is written twice, once when the container is initially > registered and again in the Docker provider once the IP has been obtained for > the container. These occur asynchronously, so the more important record (the > one with the IP) can be overwritten by the initial record. Only one record > needs to be written, so we can stop writing the initial record when the > Docker provider is being used.
[jira] [Updated] (YARN-6561) Update exception message during timeline collector aux service initialization
[ https://issues.apache.org/jira/browse/YARN-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated YARN-6561: - Summary: Update exception message during timeline collector aux service initialization (was: Update exception information during timeline collector aux service initialization) > Update exception message during timeline collector aux service initialization > - > > Key: YARN-6561 > URL: https://issues.apache.org/jira/browse/YARN-6561 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Vrushali C >Assignee: Vrushali C >Priority: Minor > Attachments: YARN-6561.001.patch > > > If the NM is started with timeline service v2 turned off AND aux services > setting still containing "timeline_collector", NM will fail to start up since > the PerNodeTimelineCollectorsAuxService#serviceInit is invoked and it throws > an exception. The exception message is a bit misleading and does not indicate > where the actual misconfiguration is. > We should update the exception message so that the right error is conveyed > and helps the cluster admin/ops to correct the relevant yarn config settings.
[jira] [Comment Edited] (YARN-6475) Fix some long function checkstyle issues
[ https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004036#comment-16004036 ] Soumabrata Chakraborty edited comment on YARN-6475 at 5/10/17 4:31 AM: --- [~templedf] Thanks for the review. All code review comments have been addressed. The PR is updated and the patch has been posted to the JIRA. was (Author: soumabrata): Patch File > Fix some long function checkstyle issues > > > Key: YARN-6475 > URL: https://issues.apache.org/jira/browse/YARN-6475 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Soumabrata Chakraborty >Priority: Trivial > Labels: newbie > Attachments: YARN-6475.001.patch > > > I am talking about these two: > {code} > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441: > @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength] > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159: > @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength] > {code}
[jira] [Commented] (YARN-6564) RM service is shutting down when ATS v2 is enabled
[ https://issues.apache.org/jira/browse/YARN-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004041#comment-16004041 ] Vrushali C commented on YARN-6564: -- I think this jira can be closed as "Not a problem"? > RM service is shutting down when ATS v2 is enabled > -- > > Key: YARN-6564 > URL: https://issues.apache.org/jira/browse/YARN-6564 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Akhil PB >Assignee: Rohith Sharma K S >Priority: Critical > > RM shutting down with following error > {code} > 2017-05-05 14:41:06,056 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: > Error in dispatcher thread > java.lang.IllegalAccessError: tried to access method > com.google.common.base.Stopwatch.<init>()V from class > org.apache.hadoop.hbase.zookeeper.MetaTableLocator > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604) > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588) > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561) > {code}
[jira] [Updated] (YARN-6475) Fix some long function checkstyle issues
[ https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Soumabrata Chakraborty updated YARN-6475: - Attachment: YARN-6475.001.patch Patch File > Fix some long function checkstyle issues > > > Key: YARN-6475 > URL: https://issues.apache.org/jira/browse/YARN-6475 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Soumabrata Chakraborty >Priority: Trivial > Labels: newbie > Attachments: YARN-6475.001.patch > > > I am talking about these two: > {code} > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441: > @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength] > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159: > @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength] > {code}
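The 150-line ceiling in the [MethodLength] warnings above comes from Checkstyle's MethodLength check. A sketch of the configuration that produces such warnings (illustrative; the exact Hadoop checkstyle.xml may differ):

```xml
<!-- Checkstyle module behind the [MethodLength] warnings quoted above;
     max=150 matches the "max allowed is 150" in the report. -->
<module name="MethodLength">
  <property name="max" value="150"/>
</module>
```

The patch resolves the warnings by splitting the two oversized methods rather than by raising this limit.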
[jira] [Commented] (YARN-6564) RM service is shutting down when ATS v2 is enabled
[ https://issues.apache.org/jira/browse/YARN-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004037#comment-16004037 ] Haibo Chen commented on YARN-6564: -- [~rohithsharma] Now that HADOOP-14386 has been resolved, what is the correct move here? > RM service is shutting down when ATS v2 is enabled > -- > > Key: YARN-6564 > URL: https://issues.apache.org/jira/browse/YARN-6564 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Akhil PB >Assignee: Rohith Sharma K S >Priority: Critical > > RM shutting down with following error > {code} > 2017-05-05 14:41:06,056 FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: > Error in dispatcher thread > java.lang.IllegalAccessError: tried to access method > com.google.common.base.Stopwatch.<init>()V from class > org.apache.hadoop.hbase.zookeeper.MetaTableLocator > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604) > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588) > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561) > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
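The {{IllegalAccessError}} above is a classic Guava version clash: HBase's {{MetaTableLocator}} was compiled against an older Guava whose {{Stopwatch}} constructor was public, while the Guava on the RM classpath makes it package-private (hence HADOOP-14386's relevance). A quick way to diagnose which jar actually supplied a class at runtime is to ask its protection domain; the helper below is a generic sketch of that diagnostic (on a real RM classpath one would pass {{com.google.common.base.Stopwatch}} — here a JDK class is used as a stand-in, since Guava isn't assumed on the classpath):

```java
import java.security.CodeSource;

// Diagnostic sketch: report which jar defined a given class, to spot
// a conflicting Guava version on the classpath.
class WhichJar {
    static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            // Core JDK classes come from the bootstrap loader and have no CodeSource.
            return src == null ? "(bootstrap)" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "(not on classpath)";
        }
    }

    public static void main(String[] args) {
        // On a YARN node: WhichJar.locate("com.google.common.base.Stopwatch")
        System.out.println(locate("java.lang.String"));
    }
}
```

If two Guava jars are present, the one printed here is the one that wins, and its `Stopwatch` visibility determines whether HBase's bytecode-level constructor call succeeds.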
[jira] [Assigned] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vrushali C reassigned YARN-6435: Assignee: Vrushali C (was: Rohith Sharma K S) > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Vrushali C >Priority: Critical > Fix For: YARN-5355, 3.0.0-alpha3 > > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch, > YARN-6435.YARN-5355.0002.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
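The capping behavior the description quotes — HBase returning {{MIN(cf max version, user provided max version)}} — can be modeled in one line. The method name below is my own, not an HBase API; it simply shows why a hardcoded {{DEFAULT_METRICS_MAX_VERSIONS}} of 1000 silently overrides a larger {{metricslimit}}:

```java
// Model of HBase's version capping: a scan can never return more
// versions than the column family was created to retain.
class EffectiveVersions {
    static int effective(int cfMaxVersions, int requestedMaxVersions) {
        // Requests below 1 are treated as 1, per the metricslimit docs.
        return Math.min(cfMaxVersions, Math.max(requestedMaxVersions, 1));
    }

    public static void main(String[] args) {
        // metricslimit=1500 is capped by the hardcoded cf limit of 1000.
        System.out.println(effective(1000, 1500));
        // A request under the cf limit is honored as-is.
        System.out.println(effective(1000, 500));
    }
}
```

This is why the fix raises the column family's max-versions setting at table-creation time rather than touching the reader path: the cap is baked into the schema, not the query.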
[jira] [Commented] (YARN-6280) Add a query parameter in ResourceManager Cluster Applications REST API to control whether or not returns ResourceRequest
[ https://issues.apache.org/jira/browse/YARN-6280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004013#comment-16004013 ] Lantao Jin commented on YARN-6280: -- Thanks [~sunilg]. About (6), the reason to use the exact name *resourceRequests* comes from the result of the REST API. http://<rm address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2 {quote} 4096 1 0 0 true * 2816 1 0 20 true datanode1 2816 1 0 20 true datanode2 {quote} I think this is the element a user wants to reduce. But if it is described in the documentation, *resource-requests* is more user-friendly. > Add a query parameter in ResourceManager Cluster Applications REST API to > control whether or not returns ResourceRequest > > > Key: YARN-6280 > URL: https://issues.apache.org/jira/browse/YARN-6280 > Project: Hadoop YARN > Issue Type: Improvement > Components: resourcemanager, restapi >Affects Versions: 2.7.3 >Reporter: Lantao Jin >Assignee: Lantao Jin > Attachments: YARN-6280.001.patch, YARN-6280.002.patch, > YARN-6280.003.patch, YARN-6280.004.patch, YARN-6280.005.patch, > YARN-6280.006.patch, YARN-6280.007.patch, YARN-6280.008.patch > > > Beginning from v2.7, the ResourceManager Cluster Applications REST API returns a > ResourceRequest list. It's a very large construction in AppInfo. > As a test, we use the below URI to query only 2 results: > http://<rm address:port>/ws/v1/cluster/apps?states=running,accepted&limit=2 > The results are very different: > ||Hadoop version||Total Characters||Total Words||Total Lines||Size|| > |2.4.1|1192| 42| 42| 1.2 KB| > |2.7.1|1222179| 48740| 48735| 1.21 MB| > Most RESTful API requesters don't know about this after upgrading, and their > old queries may cause the ResourceManager more GC load and slowness. Even if > they know, they have no way to reduce the impact on the ResourceManager > other than slowing down their query frequency. > The patch adds a query parameter "showResourceRequests" to help requesters > who don't need this information reduce the overhead. 
In consideration of > interface compatibility, the default value is true if they don't set the > parameter, so the behaviour is the same as now. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
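The server-side shape of the proposal is simple: serialize the heavyweight resourceRequests field only when the flag allows it, defaulting to the old behavior. The sketch below uses the "showResourceRequests" name from the JIRA description; the map-based AppInfo stand-in and field names are illustrative, not the real YARN `AppInfo` class:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the proposed behavior: include resourceRequests only when
// showResourceRequests is true (the default, preserving compatibility).
class AppInfoFilter {
    static Map<String, Object> toAppInfo(String appId, List<String> requests,
                                         boolean showResourceRequests) {
        Map<String, Object> info = new LinkedHashMap<>();
        info.put("id", appId);
        if (showResourceRequests) {
            // The field that ballooned responses from ~1.2 KB to ~1.21 MB.
            info.put("resourceRequests", requests);
        }
        return info;
    }

    public static void main(String[] args) {
        List<String> reqs = Arrays.asList("memory:4096 vCores:1 *");
        System.out.println(toAppInfo("app_1", reqs, false).keySet());
        System.out.println(toAppInfo("app_1", reqs, true).keySet());
    }
}
```

A caller that doesn't need request detail would append `&showResourceRequests=false` to the apps query; omitting the parameter keeps today's (large) response.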
[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy
[ https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003985#comment-16003985 ] Hadoop QA commented on YARN-6571: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 3 unchanged - 5 fixed = 3 total (was 8) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 874 unchanged - 3 fixed = 874 total (was 877) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 41s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 63m 56s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6571 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867231/YARN-6571.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1a75117aeeff 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 166be0e | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15888/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15888/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output |
[jira] [Commented] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003963#comment-16003963 ] Haibo Chen commented on YARN-6435: -- LGTM +1. Will commit it shortly > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch, > YARN-6435.YARN-5355.0002.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-3872) TimelineReader Web UI Implementation
[ https://issues.apache.org/jira/browse/YARN-3872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003958#comment-16003958 ] Haibo Chen commented on YARN-3872: -- Thanks [~rohithsharma] for the pointer! > TimelineReader Web UI Implementation > > > Key: YARN-3872 > URL: https://issues.apache.org/jira/browse/YARN-3872 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: YARN-2928 >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: YARN-5355 > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003937#comment-16003937 ] Hadoop QA commented on YARN-6435: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 37s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} YARN-5355 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase in YARN-5355 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} YARN-5355 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6435 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867229/YARN-6435.YARN-5355.0002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 10e17afc400f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-5355 / 4a4ff35 | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15887/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15887/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15887/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy
[ https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003929#comment-16003929 ] Weiwei Yang commented on YARN-6571: --- Sure [~templedf], v2 patch uploaded that added the doc for the class as well. > Fix JavaDoc issues in SchedulingPolicy > -- > > Key: YARN-6571 > URL: https://issues.apache.org/jira/browse/YARN-6571 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Weiwei Yang >Priority: Trivial > Labels: newbie > Attachments: YARN-6571.001.patch, YARN-6571.002.patch > > > There are several javadoc issues: > * Class JavaDoc is missing. > * {{getInstance()}} is missing {{@return}} and {{@param}} tags. > * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag. > * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first > sentence. > * {{getHeadroom()}} should use {code}{@code}{code} instead of {{}} tags. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
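The Javadoc conventions the issue enumerates — class-level doc, complete `@param`/`@return`/`@throws` tags, and `{@code}` instead of HTML `<code>` tags — look like the following in practice. This is a generic stand-in, not the actual `SchedulingPolicy` code:

```java
/**
 * Stand-in class illustrating the Javadoc style the patch applies:
 * a class-level comment, fully tagged methods, and {@code ...} markup.
 */
class JavadocExample {

    /**
     * Parses a policy name into its canonical form.
     *
     * @param name the policy name to parse, e.g. {@code "fair"}
     * @return the trimmed, lower-cased canonical policy name
     * @throws IllegalArgumentException if {@code name} is null or empty
     */
    static String parse(String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("name must be non-empty");
        }
        return name.trim().toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(parse("  Fair "));
    }
}
```

These are exactly the categories the QA run above reports as fixed (3 fewer javadoc warnings, 5 fewer checkstyle issues).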
[jira] [Updated] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy
[ https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated YARN-6571: -- Attachment: YARN-6571.002.patch > Fix JavaDoc issues in SchedulingPolicy > -- > > Key: YARN-6571 > URL: https://issues.apache.org/jira/browse/YARN-6571 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.8.0 >Reporter: Daniel Templeton >Assignee: Weiwei Yang >Priority: Trivial > Labels: newbie > Attachments: YARN-6571.001.patch, YARN-6571.002.patch > > > There are several javadoc issues: > * Class JavaDoc is missing. > * {{getInstance()}} is missing {{@return}} and {{@param}} tags. > * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag. > * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first > sentence. > * {{getHeadroom()}} should use {code}{@code}{code} instead of {{}} tags. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rohith Sharma K S updated YARN-6435: Attachment: YARN-6435.YARN-5355.0002.patch Updated patch reflecting Haibo review comment catch. > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch, > YARN-6435.YARN-5355.0002.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003911#comment-16003911 ] Rohith Sharma K S commented on YARN-6435: - Ah.. My bad. I will update it. > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-3872) TimelineReader Web UI Implementation
[ https://issues.apache.org/jira/browse/YARN-3872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003909#comment-16003909 ] Rohith Sharma K S commented on YARN-3872: - [~haibochen] A separate JIRA has been raised for TimelineV2 under the YARN-3368 umbrella. We should follow those JIRAs. The TimelineV2 patches have been under test. cc: [~sunilg], would you give more of an update on # TimelineV2 patch testing? # attaching screenshots for each TimelineV2 page? # linking all TimelineV2-related JIRAs under YARN-3368 to the YARN-5355 branch? > TimelineReader Web UI Implementation > > > Key: YARN-3872 > URL: https://issues.apache.org/jira/browse/YARN-3872 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: YARN-2928 >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: YARN-5355 > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (YARN-6568) A queue which runs a long time job couldn't acquire any container for long time.
[ https://issues.apache.org/jira/browse/YARN-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003908#comment-16003908 ] zhengchenyu edited comment on YARN-6568 at 5/10/17 1:57 AM: [~yufeigu] Sorry, I didn't express definitely! I said that the minShare1 which is big enough is configured by fair-scheduler.xml, not the variable 'minShare1'. It equals s1.getMinShare. look the code below. if minShare1 which is configured by fair-scheduler.xml is big enough. the variable 'minShare1' equals s1.getDemand. It means the variable 'minShare1' = resourceUsage + request. {code} Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,s1.getMinShare(), s1.getDemand()); {code} look the code below. At this time, minShareRatio1 = resourceUsage1/'minShare1 = resourceUsage1 / (resourceUsage1+request1) {code} minShareRatio1 = (double) resourceUsage1.getMemory()/ Resources.max(RESOURCE_CALCULATOR, null, minShare1, ONE).getMemory(); {code} was (Author: zhengchenyu): [~yufeigu] Sorry, I didn't express definitely! I said that the minShare1 which is big enough is configured by fair-scheduler.xml, not the variable 'minShare1'. It equals s1.getMinShare. look the code below. if minShare1 which is configured by fair-scheduler.xml is big enough. the variable 'minShare1' equals s1.getDemand. It means the variable 'minShare1' = resourceUsage + request. { Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,s1.getMinShare(), s1.getDemand()); } look the code below. At this time, minShareRatio1 = resourceUsage1/'minShare1 = resourceUsage1 / (resourceUsage1+request1) { minShareRatio1 = (double) resourceUsage1.getMemory()/ Resources.max(RESOURCE_CALCULATOR, null, minShare1, ONE).getMemory(); } > A queue which runs a long time job couldn't acquire any container for long > time. 
> > > Key: YARN-6568 > URL: https://issues.apache.org/jira/browse/YARN-6568 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.7.1 > Environment: CentOS 7.1 >Reporter: zhengchenyu > Fix For: 2.7.4 > > Original Estimate: 1m > Remaining Estimate: 1m > > In our cluster, we find some applications couldn't acquire any container for > long time. (Note: we use FairSharePolicy and FairScheduler) > First, I found some unreasonable configuration, we set minRes=maxRes. So some > application keep pending for long time, we kill some large applicaiton to > solve this problem. Then we changed this configuration, this problem > relieves. > But this problem is not completely solved. In our cluster, I found > applications in some queue which request few container keep pending for long > time. > I simulate in test cluster. I submit DistributedShell application which run > many loo applications to queueA, then I submit my own yarn application which > request container and release container constantly to queueB. At this time, > any applicaitons which are submmited to queueA keep pending! > We know this is the problem of FairSharePolicy, it consider the request of > queue. So after sort the queues, some queues which have few request are > ordered last all time. > We know if the AM container is launched, then the request will increase, But > FairSharePolicy can't distinguish which request is AM request. I think if am > container is assigned, the problem is solved. > Our companion discuss this problem. we recommend set a timeout for queue, it > means the time length of a queue is not assigned. If timeout, we set this > queue to the first place of queues list. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6568) A queue which runs a long time job couldn't acquire any container for long time.
[ https://issues.apache.org/jira/browse/YARN-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003908#comment-16003908 ] zhengchenyu commented on YARN-6568: --- [~yufeigu] Sorry, I didn't express myself clearly! I meant that the minShare1 which is big enough is the one configured in fair-scheduler.xml, not the variable 'minShare1'; it equals s1.getMinShare. Look at the code below: if the minShare1 configured in fair-scheduler.xml is big enough, the variable 'minShare1' equals s1.getDemand. That means the variable 'minShare1' = resourceUsage + request. {{ Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null, s1.getMinShare(), s1.getDemand()); }} Look at the code below: at this time, minShareRatio1 = resourceUsage1 / minShare1 = resourceUsage1 / (resourceUsage1 + request1). {{ minShareRatio1 = (double) resourceUsage1.getMemory() / Resources.max(RESOURCE_CALCULATOR, null, minShare1, ONE).getMemory(); }} > A queue which runs a long time job couldn't acquire any container for long > time. > > > Key: YARN-6568 > URL: https://issues.apache.org/jira/browse/YARN-6568 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.7.1 > Environment: CentOS 7.1 >Reporter: zhengchenyu > Fix For: 2.7.4 > > Original Estimate: 1m > Remaining Estimate: 1m > > In our cluster, we find some applications couldn't acquire any container for a > long time. (Note: we use FairSharePolicy and FairScheduler) > First, I found some unreasonable configuration: we set minRes=maxRes, so some > applications kept pending for a long time, and we killed some large applications to > solve this problem. Then we changed this configuration, and the problem > was relieved. > But the problem is not completely solved. In our cluster, I found that > applications in queues which request few containers keep pending for a long > time. > I simulated this in a test cluster. I submit a DistributedShell application which runs > many loop applications to queueA, then I submit my own YARN application, which > requests and releases containers constantly, to queueB. At this time, > any applications which are submitted to queueA keep pending! > We know this is a problem of FairSharePolicy: it considers the request of the > queue, so after sorting the queues, queues which have few requests are > ordered last all the time. > We know that if the AM container is launched, the request will increase, but > FairSharePolicy can't distinguish which request is the AM request. I think if the AM > container is assigned, the problem is solved. > Our colleagues discussed this problem; we recommend setting a timeout for a queue, > meaning the length of time during which a queue is not assigned anything. On timeout, we move this > queue to the first place of the queue list. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
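The pathology zhengchenyu describes can be reproduced numerically. Following the two snippets quoted in the comment, the effective min share is min(configured minShare, demand) with demand = usage + pending request, so when the configured minShare is very large the ratio collapses to usage / (usage + request). This is a simplified sketch of that arithmetic, not the real `FairSharePolicy` comparator:

```java
// Numeric sketch of the minShareRatio computation discussed in the
// comment: effectiveMinShare = min(configured minShare, demand),
// demand = usage + pending request, ratio = usage / effectiveMinShare.
class MinShareRatio {
    static double minShareRatio(int usage, int request, int configuredMinShare) {
        int demand = usage + request;
        int effectiveMinShare = Math.min(configuredMinShare, demand);
        return (double) usage / Math.max(effectiveMinShare, 1);
    }

    public static void main(String[] args) {
        // Huge configured minShare: a queue asking for little looks
        // nearly satisfied (ratio ~0.99) and sorts behind greedier queues.
        System.out.println(minShareRatio(10000, 100, Integer.MAX_VALUE));
        // The same queue asking for as much as it uses scores 0.5.
        System.out.println(minShareRatio(10000, 10000, Integer.MAX_VALUE));
    }
}
```

Since a lower ratio sorts a queue as more needy, a queue with small pending requests (like queueB's churn, or a queue waiting only for an AM container) is perpetually ordered last — exactly the starvation reported in the issue.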
[jira] [Comment Edited] (YARN-6568) A queue which runs a long time job couldn't acquire any container for long time.
[ https://issues.apache.org/jira/browse/YARN-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003908#comment-16003908 ] zhengchenyu edited comment on YARN-6568 at 5/10/17 1:56 AM: [~yufeigu] Sorry, I didn't express myself clearly! I meant the minShare configured in fair-scheduler.xml, which is big enough, not the variable 'minShare1'. It equals s1.getMinShare(). Look at the code below: if the minShare configured in fair-scheduler.xml is big enough, the variable 'minShare1' equals s1.getDemand(), which means 'minShare1' = resourceUsage + request. {{ Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null, s1.getMinShare(), s1.getDemand()); }} Look at the code below: at this point, minShareRatio1 = resourceUsage1 / minShare1 = resourceUsage1 / (resourceUsage1 + request1). {{ minShareRatio1 = (double) resourceUsage1.getMemory() / Resources.max(RESOURCE_CALCULATOR, null, minShare1, ONE).getMemory(); }} was (Author: zhengchenyu): [~yufeigu] Sorry, I didn't express definitely! I said that the minShare1 which is big enough is configured by fair-scheduler.xml, not the variable 'minShare1'. It equals s1.getMinShare. look the code below. if minShare1 which is configured by fair-scheduler.xml is big enough. the variable 'minShare1' equals s1.getDemand. It means the variable 'minShare1' = resourceUsage + request. {{ Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,s1.getMinShare(), s1.getDemand()); }} look the code below. At this time, minShareRatio1 = resourceUsage1/'minShare1 = resourceUsage1 / (resourceUsage1+request1) {{ minShareRatio1 = (double) resourceUsage1.getMemory()/ Resources.max(RESOURCE_CALCULATOR, null, minShare1, ONE).getMemory(); }} > A queue which runs a long time job couldn't acquire any container for a long > time. 
> > > Key: YARN-6568 > URL: https://issues.apache.org/jira/browse/YARN-6568 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 2.7.1 > Environment: CentOS 7.1 >Reporter: zhengchenyu > Fix For: 2.7.4 > > Original Estimate: 1m > Remaining Estimate: 1m > > In our cluster, we found some applications couldn't acquire any container for a > long time. (Note: we use FairSharePolicy and FairScheduler.) > First, I found some unreasonable configuration: we had set minRes=maxRes, so some > applications kept pending for a long time, and we killed some large applications to > work around this. After we changed this configuration, the problem was > relieved. > But the problem was not completely solved. In our cluster, I found that > applications in queues which request few containers keep pending for a long > time. > I simulated this in a test cluster. I submitted a DistributedShell application which runs > many long applications to queueA, then I submitted my own YARN application which > requests and releases containers constantly to queueB. At this point, > any applications submitted to queueA keep pending! > We know this is a problem of FairSharePolicy: it considers the request of the > queue, so after sorting the queues, queues which have few requests are > ordered last all the time. > We know that once the AM container is launched, the request will increase, but > FairSharePolicy can't distinguish which request is the AM request. I think that if the AM > container is assigned, the problem is solved. > My colleagues and I discussed this problem. We recommend setting a timeout for a queue, i.e. > the length of time a queue has gone without any assignment. On timeout, we move this > queue to the first place of the queue list. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
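The arithmetic in the comment above can be sketched as a standalone model. Plain long values stand in for Hadoop's Resource objects and the RESOURCE_CALCULATOR helpers; the class below is illustrative, not FairSharePolicy itself:

```java
// Simplified model of the min-share ratio from FairSharePolicy's comparator.
// When the configured minShare is very large, min(minShare, demand) collapses
// to demand = usage + request, so the ratio becomes usage / (usage + request):
// a starving queue with a big pending request still looks close to satisfied.
public class MinShareRatioSketch {
    public static double minShareRatio(long usage, long configuredMinShare, long demand) {
        long effectiveMinShare = Math.min(configuredMinShare, demand); // Resources.min(...)
        return (double) usage / Math.max(effectiveMinShare, 1L);       // Resources.max(..., ONE)
    }

    public static void main(String[] args) {
        long usage = 800, request = 200;
        long demand = usage + request; // demand = resourceUsage + request
        // Huge configured minShare: ratio = 800 / 1000 = 0.8 (looks well served)
        System.out.println(minShareRatio(usage, Long.MAX_VALUE, demand));
        // Modest configured minShare of 100: ratio = 800 / 100 = 8.0 (over-served)
        System.out.println(minShareRatio(usage, 100, demand));
    }
}
```

This is why a queue whose configured minShare exceeds its demand can never push its ratio much above 1 no matter how starved it is.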
[jira] [Commented] (YARN-5413) Create a proxy chain for ResourceManager Admin API in the Router
[ https://issues.apache.org/jira/browse/YARN-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003904#comment-16003904 ] Hadoop QA commented on YARN-5413: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 43s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 58s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 5s{color} | {color:green} YARN-2915 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 59s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in YARN-2915 has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 44s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 45s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 7 new + 206 unchanged - 0 fixed = 213 total (was 206) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s{color} | {color:green} hadoop-yarn-api in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s{color} | {color:green} hadoop-yarn-server-router in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 71m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-5413 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867216/YARN-5413-YARN-2915.v3.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 3492e4f4da2f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh
[jira] [Commented] (YARN-6160) Create an agent-less docker-less provider in the native services framework
[ https://issues.apache.org/jira/browse/YARN-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003903#comment-16003903 ] Hadoop QA commented on YARN-6160: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 31s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | 
{color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 18s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core: The patch generated 21 new + 213 unchanged - 14 fixed = 234 total (was 227) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 49s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core | | | Exception is caught when Exception is not thrown in org.apache.slider.server.appmaster.RoleLaunchService$RoleLauncher.run() At RoleLaunchService.java:is not thrown in org.apache.slider.server.appmaster.RoleLaunchService$RoleLauncher.run() At RoleLaunchService.java:[line 200] | | Failed junit tests | slider.providers.TestProviderFactory | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6160 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867223/YARN-6160-yarn-native-services.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 7c1611d0290c 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision |
[jira] [Commented] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations
[ https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003891#comment-16003891 ] Hadoop QA commented on YARN-6473: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s{color} | {color:red} hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 5s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 58s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}134m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.sls.TestSLSRunner | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6473 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867201/YARN-6473.v2.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b05069e435d5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 166be0e | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15884/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-sls-warnings.html | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15884/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-sls.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15884/testReport/ | | modules | C:
[jira] [Updated] (YARN-6160) Create an agent-less docker-less provider in the native services framework
[ https://issues.apache.org/jira/browse/YARN-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-6160: - Attachment: YARN-6160-yarn-native-services.001.patch Here is a first attempt at enabling different providers based on the artifact type of the component. One question I have is whether we should add a new artifact type such as "simple" or "command" that does not even require a tarball, or whether we should make that a special case of the "tarball" artifact type (where the id specified has a particular value such as "none" or "null"). [~jianhe] [~gsaha] Example json: {noformat} { "name": "test1", "lifetime": "3600", "components" : [ { "name": "SLEEP", "number_of_containers": 2, "artifact": { "id": "", "type": "TARBALL" }, "launch_command": "sleep 3600", "resource": { "cpus": 2, "memory": "1024" } }, { "name": "SLEEP_DOCKER", "number_of_containers": 2, "artifact": { "id": "", "type": "DOCKER" }, "launch_command": "sleep 3600", "resource": { "cpus": 2, "memory": "1024" } } ] } {noformat} > Create an agent-less docker-less provider in the native services framework > -- > > Key: YARN-6160 > URL: https://issues.apache.org/jira/browse/YARN-6160 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Fix For: yarn-native-services > > Attachments: YARN-6160-yarn-native-services.001.patch > > > The goal of the agent-less docker-less provider is to be able to use the YARN > native services framework when Docker is not installed or other methods of > app resource installation are preferable. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
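The idea of selecting a provider from the component's artifact type can be sketched as follows. This is a hypothetical illustration: the enum values and class names are assumptions, not the actual classes in the attached patch.

```java
// Illustrative only: pick a provider implementation from the component's
// artifact type, including an agent-less/docker-less case for components
// that ship no artifact at all (the "simple"/"command" idea discussed above).
enum ArtifactType { DOCKER, TARBALL, SIMPLE }

interface Provider {
    String describe();
}

public class ProviderChooser {
    public static Provider forType(ArtifactType type) {
        switch (type) {
            case DOCKER:  return () -> "docker provider";
            case TARBALL: return () -> "tarball provider";
            case SIMPLE:  return () -> "agent-less, docker-less provider";
            default: throw new IllegalArgumentException("unknown artifact type: " + type);
        }
    }
}
```

Whether the no-artifact case becomes its own type (as sketched here) or a special value of the tarball type is exactly the open question in the comment above.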
[jira] [Commented] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider
[ https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003837#comment-16003837 ] Hadoop QA commented on YARN-5949: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 18s{color} | {color:green} YARN-5734 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 56s{color} | {color:green} YARN-5734 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} YARN-5734 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s{color} | {color:green} YARN-5734 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 50s{color} | {color:green} YARN-5734 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 5s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager in YARN-5734 has 8 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} YARN-5734 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 12s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 8 new + 325 unchanged - 0 fixed = 333 total (was 325) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 1 new + 880 unchanged - 0 fixed = 881 total (was 880) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 35s{color} | {color:red} hadoop-yarn-api in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 49s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}102m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields | | | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-5949 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867200/YARN-5949-YARN-5734.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ae5c60170a47 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Commented] (YARN-6380) FSAppAttempt keeps redundant copy of the queue
[ https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003834#comment-16003834 ] Karthik Kambatla commented on YARN-6380: At least one of the checkstyle issues looks legit. Mind revving the patch? > FSAppAttempt keeps redundant copy of the queue > -- > > Key: YARN-6380 > URL: https://issues.apache.org/jira/browse/YARN-6380 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-6380.001.patch, YARN-6380.002.patch, > YARN-6380.003.patch, YARN-6380.004.patch > > > The {{FSAppAttempt}} class defines its own {{fsQueue}} variable that is a > second copy of the {{SchedulerApplicationAttempt}}'s {{queue}} variable. > Aside from being redundant, it's also a bug, because when moving > applications, we only update the {{SchedulerApplicationAttempt}}'s {{queue}}, > not the {{FSAppAttempt}}'s {{fsQueue}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
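The shadowed-field bug described above is a general pattern, sketched here with hypothetical class names (not YARN's actual types):

```java
// Generic illustration of the redundant-copy bug: the subclass caches its own
// reference to the queue, so a move that updates only the base class's field
// leaves the cached copy stale -- the hazard described in the issue.
class BaseAttempt {
    protected String queue;
    BaseAttempt(String queue) { this.queue = queue; }
    void moveTo(String newQueue) { this.queue = newQueue; } // only one copy updated
    String baseQueue() { return queue; }
}

public class CachingAttempt extends BaseAttempt {
    private final String cachedQueue; // redundant second copy of 'queue'
    public CachingAttempt(String queue) { super(queue); this.cachedQueue = queue; }
    public String getQueue() { return cachedQueue; } // stale after moveTo(...)

    public static void main(String[] args) {
        CachingAttempt attempt = new CachingAttempt("queueA");
        attempt.moveTo("queueB");
        System.out.println(attempt.baseQueue()); // queueB
        System.out.println(attempt.getQueue());  // still queueA -- the bug
    }
}
```

The fix is the one the patch takes: delete the redundant field and read through the single copy the base class already owns.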
[jira] [Commented] (YARN-6559) Findbugs warning in YARN-5355 branch
[ https://issues.apache.org/jira/browse/YARN-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003823#comment-16003823 ] Haibo Chen commented on YARN-6559: -- I don't see YARN-6518 in YARN-5355 branch either. I have asked Naga in YARN-6518 to cherry-pick. Let's wait and see what he says. > Findbugs warning in YARN-5355 branch > > > Key: YARN-6559 > URL: https://issues.apache.org/jira/browse/YARN-6559 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C >Priority: Minor > Attachments: FindBugs Report.html, YARN-6559-YARN-5355.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6518) Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice
[ https://issues.apache.org/jira/browse/YARN-6518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003822#comment-16003822 ] Haibo Chen commented on YARN-6518: -- [~Naganarasimha] Can you please cherry-pick this into YARN-5355 branch as well? > Fix warnings from Spotbugs in hadoop-yarn-server-timelineservice > > > Key: YARN-6518 > URL: https://issues.apache.org/jira/browse/YARN-6518 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: findbugs > Fix For: 3.0.0-alpha3 > > Attachments: YARN-6518.001.patch > > > There is 1 findbugs warning in hadoop-yarn-server-timelineservice since > switched to spotbugs > # Possible null pointer dereference in > org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineReaderImpl.getEntities(File, > String, TimelineEntityFilters, TimelineDataToRetrieve) due to return value > of called method > See more in > [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-warnings.html] -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
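This class of warning typically comes from the java.io.File.listFiles() contract: it returns null, not an empty array, when the path is not a directory or an I/O error occurs. A minimal sketch of the defensive pattern, with a hypothetical helper name (the actual fix in the patch may differ):

```java
import java.io.File;

// File.listFiles() returns null rather than an empty array when the path is
// not a directory or an I/O error occurs; iterating the result unchecked is
// the null-dereference pattern Spotbugs flags.
public class SafeListing {
    public static File[] listOrEmpty(File dir) {
        File[] children = dir.listFiles();
        return (children != null) ? children : new File[0];
    }
}
```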
[jira] [Commented] (YARN-6323) Rolling upgrade/config change is broken on timeline v2.
[ https://issues.apache.org/jira/browse/YARN-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003803#comment-16003803 ] Vrushali C commented on YARN-6323: -- bq. However, in the upgrade path, we could choose to not write anything or have a null writer. So while testing on an NM, I was pleasantly surprised to see this message in the NM logs. We do have defensive code right at the writer end so that we don't end up trying to write nulls to the backend as part of row keys. https://github.com/apache/hadoop/blob/YARN-5355/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java#L131 That said, the NPE when trying to read previous app state still needs to be fixed. https://github.com/apache/hadoop/blob/YARN-5355/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java#L387 I have a patch for that and will upload it shortly. > Rolling upgrade/config change is broken on timeline v2. > > > Key: YARN-6323 > URL: https://issues.apache.org/jira/browse/YARN-6323 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Li Lu >Assignee: Vrushali C > Labels: yarn-5355-merge-blocker > > Found this issue when deploying on real clusters. If there are apps running > when we enable timeline v2 (with work preserving restart enabled), node > managers will fail to start due to missing app context data. We should > probably assign some default names to these "left over" apps. I believe it's > suboptimal to let users clean up the whole cluster before enabling timeline > v2. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5413) Create a proxy chain for ResourceManager Admin API in the Router
[ https://issues.apache.org/jira/browse/YARN-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003794#comment-16003794 ] Giovanni Matteo Fumarola commented on YARN-5413: Thanks [~subru] for the feedback. I fixed some of the checkstyle warnings and the missing javadocs. > Create a proxy chain for ResourceManager Admin API in the Router > > > Key: YARN-5413 > URL: https://issues.apache.org/jira/browse/YARN-5413 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-5413-YARN-2915.v1.patch, > YARN-5413-YARN-2915.v2.patch, YARN-5413-YARN-2915.v3.patch > > > As detailed in the proposal in the umbrella JIRA, we are introducing a new > component that routes client requests to the appropriate ResourceManager(s). This > JIRA tracks the creation of a proxy for the ResourceManager Admin API in the > Router. This provides a placeholder for: > 1) throttling mis-behaving clients (YARN-1546) > 3) masking the access to multiple RMs (YARN-3659) > We are planning to follow the interceptor pattern like we did in YARN-2884 to > generalize the approach and have only dynamic coupling for Federation. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue
[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003792#comment-16003792 ] Wangda Tan commented on YARN-2113: -- Thanks [~sunilg]. A few comments: 1) INTRAQUEUE_PREEMPTION_ORDER => _POLICY (since it is not only about ordering, but also affects whether preemption can or cannot be done). Some internal fields / naming need to be updated as well. 2) Javadoc issues in calculateIdealAssignedResourcePerApp. Beyond these two, +1 to the latest patch. > Add cross-user preemption within CapacityScheduler's leaf-queue > --- > > Key: YARN-2113 > URL: https://issues.apache.org/jira/browse/YARN-2113 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler > Reporter: Vinod Kumar Vavilapalli > Assignee: Sunil G > Attachments: IntraQueue Preemption-Impact Analysis.pdf, TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt, YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch > > > Preemption today only works across queues and moves around resources across queues per demand and usage. We should also have user-level preemption within a queue, to balance capacity across users in a predictable manner.
[jira] [Updated] (YARN-5413) Create a proxy chain for ResourceManager Admin API in the Router
[ https://issues.apache.org/jira/browse/YARN-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-5413: --- Attachment: YARN-5413-YARN-2915.v3.patch > Create a proxy chain for ResourceManager Admin API in the Router > > > Key: YARN-5413 > URL: https://issues.apache.org/jira/browse/YARN-5413 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager > Reporter: Subru Krishnan > Assignee: Giovanni Matteo Fumarola > Attachments: YARN-5413-YARN-2915.v1.patch, YARN-5413-YARN-2915.v2.patch, YARN-5413-YARN-2915.v3.patch > > > As detailed in the proposal in the umbrella JIRA, we are introducing a new component that routes client requests to the appropriate ResourceManager(s). This JIRA tracks the creation of a proxy for the ResourceManager Admin API in the Router. This provides a placeholder for: > 1) throttling mis-behaving clients (YARN-1546) > 3) masking the access to multiple RMs (YARN-3659) > We are planning to follow the interceptor pattern like we did in YARN-2884 to generalize the approach and have only dynamic coupling for Federation.
[jira] [Commented] (YARN-5841) Report only local collectors on node upon resync with RM after RM fails over
[ https://issues.apache.org/jira/browse/YARN-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003745#comment-16003745 ] Haibo Chen commented on YARN-5841: -- NMs maintain a set of registered collectors that they send to the RM during heartbeat. Tracing the code, a collector is added to the set of registered collectors only if an AM container is launched. Thus, NMs are already reporting only local collectors to the RM. Unless I am missing something, I think this jira can be closed as not a problem. > Report only local collectors on node upon resync with RM after RM fails over > > > Key: YARN-5841 > URL: https://issues.apache.org/jira/browse/YARN-5841 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver > Reporter: Varun Saxena > > As per discussion on YARN-3359, we can potentially optimize reporting of collectors to the RM after the RM fails over. > Currently an NM would report all the collectors known to itself in the HB after resync with the RM. This would mean many NMs may report a pretty similar set of collector infos in the first NM HB on reconnection. > This JIRA is to explore how to optimize this flow and, if possible, fix it.
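[Editor's sketch] The mechanism Haibo describes can be pictured with a small stand-in class (LocalCollectors and its method names are illustrative, not the actual NodeManager code): the map is populated only when an AM container launches locally, so the heartbeat payload naturally contains only local collectors.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the NM's registered-collectors bookkeeping.
public class LocalCollectors {
    // appId -> collector address; written only on local AM launch
    private final Map<String, String> registered = new ConcurrentHashMap<>();

    /** Called only when an AM container is launched on this node. */
    public void onAmContainerLaunched(String appId, String collectorAddr) {
        registered.put(appId, collectorAddr);
    }

    /** Snapshot sent to the RM on heartbeat: local collectors only. */
    public Map<String, String> heartbeatPayload() {
        return new ConcurrentHashMap<>(registered);
    }
}
```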
[jira] [Commented] (YARN-6380) FSAppAttempt keeps redundant copy of the queue
[ https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003730#comment-16003730 ] Hadoop QA commented on YARN-6380: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 20s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 17s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6380 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867196/YARN-6380.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1b9ab9a7f0ca 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 166be0e | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15882/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15882/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15882/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > FSAppAttempt keeps redundant copy of the
[jira] [Commented] (YARN-5006) ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk
[ https://issues.apache.org/jira/browse/YARN-5006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003712#comment-16003712 ] Daniel Templeton commented on YARN-5006: Thanks for the patch, [~bibinchundatt]. A few comments:
* {{RM_ZK_NUM_ZNODE_SIZE_LIMIT}} would be clearer as {{RM_ZK_ZNODE_SIZE_LIMIT_BYTES}}. Even better, could we make it generic? {{RM_APP_DATA_SIZE_LIMIT_BYTES}}
* I don't think the {{isRejectApp()}} method makes it clearer; just inline the _instanceof_.
* {{StoreLimitException}} needs javadoc to explain what it should be used for.
* {{super("RMStateStore not limit reached");}} could use a clearer message, like: "Application exceeds the maximum allowed size for application data. See yarn.resourcemanager.whatever.max-data-size.bytes." The same goes for the message in {{storeApplicationStateInternal()}}.
* Please add javadoc for the new methods in {{RMAppEvent}}.
* Please add some additional unit tests to cover the new behavior.
> ResourceManager quit due to ApplicationStateData exceed the limit size of znode in zk
> --
>
> Key: YARN-5006
> URL: https://issues.apache.org/jira/browse/YARN-5006
> Project: Hadoop YARN
> Issue Type: Bug
> Components: resourcemanager
> Affects Versions: 2.6.0, 2.7.2
> Reporter: dongtingting
> Assignee: Bibin A Chundatt
> Priority: Critical
> Attachments: YARN-5006.001.patch
>
> A client submits a job that adds 1 file into the DistributedCache. When the job is submitted, the ResourceManager stores ApplicationStateData into zk. The ApplicationStateData exceeds the limit size of the znode, and the RM exits with 1.
> The related code in RMStateStore.java:
> {code}
> private static class StoreAppTransition
>     implements SingleArcTransition<RMStateStore, RMStateStoreEvent> {
>   @Override
>   public void transition(RMStateStore store, RMStateStoreEvent event) {
>     if (!(event instanceof RMStateStoreAppEvent)) {
>       // should never happen
>       LOG.error("Illegal event type: " + event.getClass());
>       return;
>     }
>     ApplicationState appState = ((RMStateStoreAppEvent) event).getAppState();
>     ApplicationId appId = appState.getAppId();
>     ApplicationStateData appStateData = ApplicationStateData
>         .newInstance(appState);
>     LOG.info("Storing info for app: " + appId);
>     try {
>       store.storeApplicationStateInternal(appId, appStateData); // store the appStateData
>       store.notifyApplication(new RMAppEvent(appId,
>           RMAppEventType.APP_NEW_SAVED));
>     } catch (Exception e) {
>       LOG.error("Error storing app: " + appId, e);
>       store.notifyStoreOperationFailed(e); // handle fail event, system exit
>     }
>   };
> }
> {code}
> The Exception log:
> {code}
> ...
> 2016-04-20 11:26:35,732 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore AsyncDispatcher event handler: Maxed out ZK retries. Giving up!
> 2016-04-20 11:26:35,732 ERROR org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore AsyncDispatcher event handler: Error storing app: application_1461061795989_17671
> org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
>         at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
>         at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:931)
>         at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:911)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:936)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:933)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1075)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1096)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:933)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:947)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:956)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:626)
>         at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:138)
>         at
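[Editor's sketch] The guard under review can be illustrated with a small, hypothetical version of the size check (the class name StoreLimitException comes from the review discussion; the constant and message wording here are illustrative, not the final patch): reject oversized application data up front instead of letting the ZK write fail and the RM exit.

```java
// Hypothetical sketch of a pre-store size check for application state.
public class AppStateSizeCheck {
    /** Thrown when serialized app data exceeds the configured store limit. */
    public static class StoreLimitException extends Exception {
        StoreLimitException(String msg) { super(msg); }
    }

    // ZooKeeper's default jute.maxbuffer znode payload limit is ~1 MB.
    static final long MAX_APP_DATA_SIZE_BYTES = 1024L * 1024L;

    /** Rejects application data that would exceed the znode size limit. */
    public static void checkSize(byte[] serializedAppState)
            throws StoreLimitException {
        if (serializedAppState.length > MAX_APP_DATA_SIZE_BYTES) {
            throw new StoreLimitException(
                "Application exceeds the maximum allowed size for application"
                + " data (" + serializedAppState.length + " > "
                + MAX_APP_DATA_SIZE_BYTES + " bytes)");
        }
    }
}
```

With such a check, the store path can fail the single application instead of taking down the ResourceManager.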
[jira] [Commented] (YARN-6504) Add support for resource profiles in MapReduce
[ https://issues.apache.org/jira/browse/YARN-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003702#comment-16003702 ] Daniel Templeton commented on YARN-6504: Forgot to ask about additional tests, too. > Add support for resource profiles in MapReduce > -- > > Key: YARN-6504 > URL: https://issues.apache.org/jira/browse/YARN-6504 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager > Reporter: Varun Vasudev > Assignee: Varun Vasudev > Attachments: YARN-6504-YARN-3926.001.patch > >
[jira] [Commented] (YARN-6484) [Documentation] Documenting the YARN Federation feature
[ https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003683#comment-16003683 ] Hadoop QA commented on YARN-6484: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 9s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6484 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867193/YARN-6484-YARN-2915.v1.patch | | Optional Tests | asflicense mvnsite xml | | uname | Linux a19729e30401 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 0a93a32 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/15881/artifact/patchprocess/whitespace-eol.txt | | modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15881/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [Documentation] Documenting the YARN Federation feature > --- > > Key: YARN-6484 > URL: https://issues.apache.org/jira/browse/YARN-6484 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager > Affects Versions: YARN-2915 > Reporter: Subru Krishnan > Assignee: Carlo Curino > Attachments: YARN-6484-YARN-2915.v0.patch, YARN-6484-YARN-2915.v1.patch > > > We should document the high level design and configuration to enable YARN Federation
[jira] [Commented] (YARN-6475) Fix some long function checkstyle issues
[ https://issues.apache.org/jira/browse/YARN-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003682#comment-16003682 ] Daniel Templeton commented on YARN-6475: Thanks for working on the patch, [~soumabrata]. Looks generally good. A few comments:
* In {{buildContainerRuntimeContext()}}, {{runAsUser}} is only used once, so you may as well inline it like you did with the rest of the attribute values.
* In {{deriveContainerWorkDir()}}, the {{StringBuilder}} is unnecessary. The compiler will turn + concatenation into a {{StringBuilder}} under the covers. If you think + concatenation would be easier to read, feel free to use it. Or indeed, this change may not be necessary.
* In {{prepareContainer()}}, your code:
{code}
exec.prepareContainer(new ContainerPrepareContext.Builder()
    .setContainer(container)
    .setLocalizedResources(localResources)
    .setUser(container.getUser())
    .setContainerLocalDirs(containerLocalDirs)
    .setCommands(container.getLaunchContext()
    .getCommands()).build());
{code}
might be better formatted as:
{code}
exec.prepareContainer(new ContainerPrepareContext.Builder()
    .setContainer(container)
    .setLocalizedResources(localResources)
    .setUser(container.getUser())
    .setContainerLocalDirs(containerLocalDirs)
    .setCommands(container.getLaunchContext().getCommands())
    .build());
{code}
* In {{StatusUpdaterRunnable}}, we don't need an empty line after the class declaration.
* In {{StatusUpdaterRunnable}}, it looks like you have several places with 8-space indentation where 4-space should do.
* In {{StatusUpdaterRunnable.run()}}, you may as well replace {{if (containersToSignal.size() != 0)}} with {{if (!containersToSignal.isEmpty())}}
* In {{StatusUpdaterRunnable.run()}}, you should also fix the {{catch (Throwable)}}. Either make it a multi-catch instead or just catch {{Exception}}.
Also, please post a patch from the PR to this JIRA so that the Jenkins pre-commit has something to chew on.
> Fix some long function checkstyle issues > > > Key: YARN-6475 > URL: https://issues.apache.org/jira/browse/YARN-6475 > Project: Hadoop YARN > Issue Type: Bug > Reporter: Miklos Szegedi > Assignee: Soumabrata Chakraborty > Priority: Trivial > Labels: newbie > > I am talking about these two: > {code} > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java:441: > @Override:3: Method length is 176 lines (max allowed is 150). [MethodLength] > ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java:159: > @Override:3: Method length is 158 lines (max allowed is 150). [MethodLength] > {code}
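[Editor's sketch] Two of the review suggestions above can be shown in a tiny stand-in (StatusUpdaterSketch and the container types are illustrative, not the real NodeManager classes): prefer {{isEmpty()}} over {{size() != 0}}, and catch {{Exception}} rather than {{Throwable}}.

```java
import java.util.List;

// Hypothetical stand-in illustrating the review suggestions.
public class StatusUpdaterSketch {
    /** Clearer than containersToSignal.size() != 0. */
    public static boolean shouldSignal(List<String> containersToSignal) {
        return !containersToSignal.isEmpty();
    }

    /** Catch Exception, not Throwable: Errors should propagate. */
    public static String runOnce(Runnable heartbeat) {
        try {
            heartbeat.run();
            return "ok";
        } catch (Exception e) { // narrower than catch (Throwable)
            return "failed: " + e.getMessage();
        }
    }
}
```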
[jira] [Commented] (YARN-6504) Add support for resource profiles in MapReduce
[ https://issues.apache.org/jira/browse/YARN-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003679#comment-16003679 ] Hadoop QA commented on YARN-6504: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 2s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 35s{color} | {color:green} YARN-3926 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 8s{color} | {color:red} root in YARN-3926 failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 23s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 19s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 4s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 32s{color} | {color:green} YARN-3926 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 36s{color} | {color:green} YARN-3926 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 21s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 21s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 19s{color} | {color:orange} root: The patch generated 19 new + 1044 unchanged - 4 fixed = 1063 total (was 1048) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 0s{color} | {color:red} hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 32s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api generated 1 new + 123 unchanged - 0 fixed = 124 total (was 123) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 50s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 12s{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 24s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 35s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 56s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}183m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app | | | Invocation of toString on RMContainerRequestor$ContainerRequest.hosts in org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor$ContainerRequest.toString() At RMContainerRequestor.java:in org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor$ContainerRequest.toString() At
[jira] [Commented] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations
[ https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003675#comment-16003675 ] Carlo Curino commented on YARN-6473: I addressed the checkstyle issue; the findbugs warning is already present, and the unit test is flaky (it sometimes passes and sometimes doesn't) and fails in trunk as well: YARN-5240 > Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations > --- > > Key: YARN-6473 > URL: https://issues.apache.org/jira/browse/YARN-6473 > Project: Hadoop YARN > Issue Type: Bug > Reporter: Carlo Curino > Assignee: Carlo Curino > Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch, YARN-6473.v2.patch > > > This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. > It is particularly useful for creating integration tests, or for test clusters, where we can continuously (and possibly at some cost) check that the ReservationSystem + Scheduler are operating as expected.
[jira] [Updated] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations
[ https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6473: --- Attachment: YARN-6473.v2.patch > Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations > --- > > Key: YARN-6473 > URL: https://issues.apache.org/jira/browse/YARN-6473 > Project: Hadoop YARN > Issue Type: Bug > Reporter: Carlo Curino > Assignee: Carlo Curino > Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch, YARN-6473.v2.patch > > > This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. > It is particularly useful for creating integration tests, or for test clusters, where we can continuously (and possibly at some cost) check that the ReservationSystem + Scheduler are operating as expected.
[jira] [Commented] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider
[ https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003668#comment-16003668 ] Jonathan Hung commented on YARN-5949: - Attached 003 patch which pushes the client request -> key value map transformation down from RMWebServices to MutableCSConfigurationProvider, to avoid weird parsing logic. > Add pluggable configuration policy interface as a component of > MutableCSConfigurationProvider > - > > Key: YARN-5949 > URL: https://issues.apache.org/jira/browse/YARN-5949 > Project: Hadoop YARN > Issue Type: Sub-task > Reporter: Jonathan Hung > Assignee: Jonathan Hung > Attachments: YARN-5949-YARN-5734.001.patch, YARN-5949-YARN-5734.002.patch, YARN-5949-YARN-5734.003.patch > > > This will allow different policies to customize how/if configuration changes should be applied (for example, a policy might restrict whether a configuration change by a certain user is allowed). This will be enforced by the MutableCSConfigurationProvider.
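[Editor's sketch] What a pluggable configuration policy might look like, per the issue description ("a policy might restrict whether a configuration change by a certain user is allowed"). The interface and method names here are illustrative guesses, not the actual YARN-5949 API: the provider would consult the policy before applying a key/value update.

```java
import java.util.Map;

// Hypothetical sketch of a pluggable mutation policy.
public class MutableConfSketch {
    /** Decides whether a user's key/value config update may be applied. */
    public interface ConfigurationMutationPolicy {
        boolean isMutationAllowed(String user, Map<String, String> kvUpdates);
    }

    /** Example policy: only the "admin" user may change configuration. */
    public static class AdminOnlyPolicy implements ConfigurationMutationPolicy {
        @Override
        public boolean isMutationAllowed(String user,
                                         Map<String, String> kvUpdates) {
            return "admin".equals(user);
        }
    }
}
```

The provider enforcing the policy (here MutableCSConfigurationProvider) would reject the request when {{isMutationAllowed}} returns false.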
[jira] [Commented] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
[ https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003669#comment-16003669 ] Gour Saha commented on YARN-6544: - [~karams] thanks for adding the UT. The patch looks great now. Just fix the 2 whitespace-related issues and I will commit the patch. > Add Null check RegistryDNS service while parsing registry records > - > > Key: YARN-6544 > URL: https://issues.apache.org/jira/browse/YARN-6544 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services > Affects Versions: yarn-native-services > Reporter: Karam Singh > Assignee: Karam Singh > Fix For: yarn-native-services > > Attachments: YARN-6544-yarn-native-services.001.patch, YARN-6544-yarn-native-services.002.patch, YARN-6544-yarn-native-services.002.patch, YARN-6544-yarn-native-services.003.patch > > > Add a null check to the RegistryDNS service while parsing registry records for the Yarn persistence attribute. > As of now it assumes that the yarn registry record always contains yarn persistence, which is not the case
[jira] [Updated] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider
[ https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung updated YARN-5949: Attachment: YARN-5949-YARN-5734.003.patch > Add pluggable configuration policy interface as a component of > MutableCSConfigurationProvider > - > > Key: YARN-5949 > URL: https://issues.apache.org/jira/browse/YARN-5949 > Project: Hadoop YARN > Issue Type: Sub-task > Reporter: Jonathan Hung > Assignee: Jonathan Hung > Attachments: YARN-5949-YARN-5734.001.patch, YARN-5949-YARN-5734.002.patch, YARN-5949-YARN-5734.003.patch > > > This will allow different policies to customize how/if configuration changes should be applied (for example, a policy might restrict whether a configuration change by a certain user is allowed). This will be enforced by the MutableCSConfigurationProvider.
[jira] [Commented] (YARN-6533) Race condition in writing service record to registry in yarn native services
[ https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003665#comment-16003665 ] Hadoop QA commented on YARN-6533: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 27s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | 
{color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core: The patch generated 1 new + 145 unchanged - 1 fixed = 146 total (was 146) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 30s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6533 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867188/YARN-6533-yarn-native-services.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 538d432deb9e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / 3c9f707 | | Default Java | 1.8.0_131 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15880/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15880/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core | | Console output |
[jira] [Created] (YARN-6576) Improve diagnostics by moving error stack traces from NM to Slider AM
Yesha Vora created YARN-6576: Summary: Improve diagnostics by moving error stack traces from NM to Slider AM Key: YARN-6576 URL: https://issues.apache.org/jira/browse/YARN-6576 Project: Hadoop YARN Issue Type: Sub-task Reporter: Yesha Vora Slider AM diagnostics should be improved to show the root cause of app failures for issues like a missing Docker image. Currently, the Slider AM log does not show a proper error message to debug such failures. Users have to access NodeManager logs to find the root cause of issues where a container failed to start.
[jira] [Commented] (YARN-3742) YARN RM will shut down if ZKClient creation times out
[ https://issues.apache.org/jira/browse/YARN-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003652#comment-16003652 ] Hudson commented on YARN-3742: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11708 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11708/]) YARN-3742. YARN RM will shut down if ZKClient creation times out. (kasha: rev 166be0ee95d5ef976f074342656b289b41a11ccd) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMCriticalThreadUncaughtExceptionHandler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestRMFailover.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMFatalEvent.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ActiveStandbyElectorBasedElectorService.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestMemoryRMStateStore.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMFatalEventType.java * (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java > YARN RM will shut down if ZKClient creation times out > --- > > Key: YARN-3742 > URL: https://issues.apache.org/jira/browse/YARN-3742 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.0 >Reporter: Wilfred Spiegelenburg >Assignee: Daniel Templeton > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-3742.001.patch, YARN-3742.002.patch > > > The RM goes down showing the following stacktrace if the ZK client connection > fails to be created. We should not exit but transition to StandBy and stop > doing things and let the other RM take over. > {code} > 2015-04-19 01:22:20,513 FATAL > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received a > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEvent of type > STATE_STORE_OP_FAILED. Cause: > java.io.IOException: Wait for ZKClient creation timed out > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1066) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1090) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.existsWithRetries(ZKRMStateStore.java:996) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationStateInternal(ZKRMStateStore.java:643) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:162) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:147) > at > org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362) > at > 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:879) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:874) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) > at
[jira] [Commented] (YARN-6380) FSAppAttempt keeps redundant copy of the queue
[ https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003641#comment-16003641 ] Karthik Kambatla commented on YARN-6380: +1, pending Jenkins. > FSAppAttempt keeps redundant copy of the queue > -- > > Key: YARN-6380 > URL: https://issues.apache.org/jira/browse/YARN-6380 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-6380.001.patch, YARN-6380.002.patch, > YARN-6380.003.patch, YARN-6380.004.patch > > > The {{FSAppAttempt}} class defines its own {{fsQueue}} variable that is a > second copy of the {{SchedulerApplicationAttempt}}'s {{queue}} variable. > Aside from being redundant, it's also a bug, because when moving > applications, we only update the {{SchedulerApplicationAttempt}}'s {{queue}}, > not the {{FSAppAttempt}}'s {{fsQueue}}.
[jira] [Updated] (YARN-6380) FSAppAttempt keeps redundant copy of the queue
[ https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Templeton updated YARN-6380: --- Attachment: YARN-6380.004.patch Updated the patch to clean up the {{FSAppAttempt.getQueue()}} method. > FSAppAttempt keeps redundant copy of the queue > -- > > Key: YARN-6380 > URL: https://issues.apache.org/jira/browse/YARN-6380 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Affects Versions: 3.0.0-alpha2 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Attachments: YARN-6380.001.patch, YARN-6380.002.patch, > YARN-6380.003.patch, YARN-6380.004.patch > > > The {{FSAppAttempt}} class defines its own {{fsQueue}} variable that is a > second copy of the {{SchedulerApplicationAttempt}}'s {{queue}} variable. > Aside from being redundant, it's also a bug, because when moving > applications, we only update the {{SchedulerApplicationAttempt}}'s {{queue}}, > not the {{FSAppAttempt}}'s {{fsQueue}}.
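The redundant-copy bug YARN-6380 describes can be sketched with a minimal, hypothetical model (plain Java stand-ins, not the real YARN classes): the subclass caches its own copy of a field the superclass already owns, so a move updates only the superclass copy and the cached one goes stale.

```java
// Hypothetical minimal model of the YARN-6380 pattern; class and field names
// mirror the JIRA description but this is not the actual Hadoop code.
class SchedulerAttempt {
    private String queue;
    SchedulerAttempt(String queue) { this.queue = queue; }
    void move(String newQueue) { this.queue = newQueue; } // updates only this copy
    String getQueue() { return queue; }
}

class AppAttempt extends SchedulerAttempt {
    private final String fsQueue; // redundant second copy of the queue
    AppAttempt(String queue) { super(queue); this.fsQueue = queue; }
    String getFsQueue() { return fsQueue; } // goes stale after move()
}

public class RedundantQueueDemo {
    public static void main(String[] args) {
        AppAttempt app = new AppAttempt("root.default");
        app.move("root.other");
        System.out.println(app.getQueue());   // root.other
        System.out.println(app.getFsQueue()); // root.default -- the stale copy
    }
}
```

Dropping the duplicate field and reading the queue through the superclass accessor, as the patch does for {{FSAppAttempt.getQueue()}}, leaves a single source of truth.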
[jira] [Commented] (YARN-6533) Race condition in writing service record to registry in yarn native services
[ https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003636#comment-16003636 ] Billie Rinaldi commented on YARN-6533: -- Thanks for the review, [~jianhe]. I am attaching a new patch that allows the registerComponent method to be called, but removes the initial registration of the service record. (After thinking about it some more, I don't think any providers will want this partial record to be written.) I also found a potentially bad bug where the YARN_ID was not being encoded when the service record was updated. > Race condition in writing service record to registry in yarn native services > > > Key: YARN-6533 > URL: https://issues.apache.org/jira/browse/YARN-6533 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-6533-yarn-native-services.001.patch, > YARN-6533-yarn-native-services.002.patch > > > The ServiceRecord is written twice, once when the container is initially > registered and again in the Docker provider once the IP has been obtained for > the container. These occur asynchronously, so the more important record (the > one with the IP) can be overwritten by the initial record. Only one record > needs to be written, so we can stop writing the initial record when the > Docker provider is being used.
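The race described in YARN-6533 can be sketched as follows (illustrative names, not the real registry API): two asynchronous writers publish a service record with last-write-wins semantics, so the IP-bearing record can be clobbered by the initial IP-less one. The fix chosen in the JIRA is simply to stop issuing the initial write.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of the YARN-6533 last-write-wins race.
public class RecordRaceDemo {
    static final AtomicReference<String> registry = new AtomicReference<>();

    static void writeInitialRecord() { registry.set("record-without-ip"); }
    static void writeRecordWithIp()  { registry.set("record-with-ip"); }

    public static void main(String[] args) {
        // Unlucky interleaving: the IP record lands first, then is overwritten.
        writeRecordWithIp();
        writeInitialRecord();
        System.out.println(registry.get()); // record-without-ip -> IP lost

        // With the initial write removed, only one record is ever written.
        registry.set(null);
        writeRecordWithIp();
        System.out.println(registry.get()); // record-with-ip
    }
}
```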
[jira] [Commented] (YARN-6435) [ATSv2] Can't retrieve more than 1000 versions of metrics in time series
[ https://issues.apache.org/jira/browse/YARN-6435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003634#comment-16003634 ] Haibo Chen commented on YARN-6435: -- Thanks for the patch, [~rohithsharma]. The max version is still being set to DEFAULT_METRICS_MAX_VERSIONS directly, rather than being read from hbaseConf (i.e. it is still not configurable). > [ATSv2] Can't retrieve more than 1000 versions of metrics in time series > > > Key: YARN-6435 > URL: https://issues.apache.org/jira/browse/YARN-6435 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Rohith Sharma K S >Assignee: Rohith Sharma K S >Priority: Critical > Attachments: YARN-6435.0001.patch, YARN-6435.YARN-5355.0001.patch > > > It is observed that, even though *metricslimit* is set to 1500, maximum > number of metrics values retrieved is 1000. > This is due to, while creating EntityTable, metrics column family max version > is specified as 1000 which is hardcoded in > {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}. So, HBase will return max > version with following {{MIN(cf max version , user provided max version)}}. > This behavior is contradicting the documentation which claims that > {code} > metricslimit - If specified, defines the number of metrics to return. > Considered only if fields contains METRICS/ALL or metricstoretrieve is > specified. Ignored otherwise. The maximum possible value for metricslimit can > be maximum value of Integer. If it is not specified or has a value less than > 1, and metrics have to be retrieved, then metricslimit will be considered as > 1 i.e. latest single value of metric(s) will be returned. {code}
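The MIN rule quoted above explains the symptom directly: HBase caps the versions a read returns at the column family's MAX_VERSIONS, so the effective count is the minimum of the table-creation-time cap and the user-requested limit. A small sketch (the 1000 mirrors the hardcoded {{EntityTable#DEFAULT_METRICS_MAX_VERSIONS}}; the helper is illustrative, not an HBase API):

```java
// Why metricslimit=1500 still yields only 1000 values: the column family's
// MAX_VERSIONS, fixed when the table was created, silently caps the read.
public class MaxVersionsDemo {
    static int effectiveVersions(int cfMaxVersions, int requestedVersions) {
        return Math.min(cfMaxVersions, requestedVersions);
    }

    public static void main(String[] args) {
        int cfMaxVersions = 1000; // hardcoded at table-creation time
        System.out.println(effectiveVersions(cfMaxVersions, 1500)); // 1000

        // Reading the cap from configuration at table creation (what the
        // review asks for) raises the ceiling instead of truncating.
        System.out.println(effectiveVersions(5000, 1500)); // 1500
    }
}
```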
[jira] [Commented] (YARN-3742) YARN RM will shut down if ZKClient creation times out
[ https://issues.apache.org/jira/browse/YARN-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003625#comment-16003625 ] Karthik Kambatla commented on YARN-3742: Just committed to trunk and branch-2. Thanks for fixing this and making the logs actionable, Daniel. > YARN RM will shut down if ZKClient creation times out > --- > > Key: YARN-3742 > URL: https://issues.apache.org/jira/browse/YARN-3742 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.0 >Reporter: Wilfred Spiegelenburg >Assignee: Daniel Templeton > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-3742.001.patch, YARN-3742.002.patch > > > The RM goes down showing the following stacktrace if the ZK client connection > fails to be created. We should not exit but transition to StandBy and stop > doing things and let the other RM take over. > {code} > 2015-04-19 01:22:20,513 FATAL > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received a > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEvent of type > STATE_STORE_OP_FAILED. 
Cause: > java.io.IOException: Wait for ZKClient creation timed out > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1066) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1090) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.existsWithRetries(ZKRMStateStore.java:996) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationStateInternal(ZKRMStateStore.java:643) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:162) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:147) > at > org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362) > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:879) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:874) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) > at java.lang.Thread.run(Thread.java:745) > {code}
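The behavior the issue asks for, "not exit but transition to StandBy and let the other RM take over", can be sketched as a decision on the fatal event (illustrative names, not the actual ResourceManager API):

```java
// Hypothetical sketch of the YARN-3742 fix: on a fatal state-store error,
// an HA-enabled active RM steps down to standby instead of shutting down;
// without HA there is no peer, so stopping remains the only safe option.
public class FatalEventDemo {
    enum HAState { ACTIVE, STANDBY, STOPPED }

    static HAState onFatalEvent(boolean haEnabled, HAState current) {
        if (haEnabled && current == HAState.ACTIVE) {
            return HAState.STANDBY; // give up leadership, keep the process alive
        }
        return HAState.STOPPED; // non-HA: shut down
    }

    public static void main(String[] args) {
        System.out.println(onFatalEvent(true, HAState.ACTIVE));  // STANDBY
        System.out.println(onFatalEvent(false, HAState.ACTIVE)); // STOPPED
    }
}
```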
[jira] [Commented] (YARN-6504) Add support for resource profiles in MapReduce
[ https://issues.apache.org/jira/browse/YARN-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003617#comment-16003617 ] Daniel Templeton commented on YARN-6504: A few comments, [~vvasudev]: {{TaskAttemptImpl}} * Since you're setting {{resourceProfile}} in the constructor, it would be better not to set it in the declaration. * The {{LOG.info()}} in the constructor should probably be {{LOG.debug()}}. I might also move it into {{getResourceProfile()}}. {{ContainerRequestEvent}} * {{Configuration}} import is unused. * {{resourceProfile}} in the constructor args should probably come right after {{capability}}. * Is it useful to overload {{createContainerRequestEventForFailedContainer()}}? Doesn't look like the 2-arg version is needed anymore. And if you dump the 2-arg version, you can add profile to the 2-arg constructor, making {{createContainerRequestEventForFailedContainer()}} simpler. * Missing javadoc for new accessors. {{RMCommunicator}} * Missing javadoc for {{getResourceProfilesMap()}} {{RMContainerAllocator}} * {{Hamlet}} import is unused. * Might be cleaner to move the logic about calculating a resource from the profile and capability into a method you can reuse. {{RMContainerRequester}} * The profile arg in the {{ContainerRequest}} constructors should come right after capability. {{MRJobConfig}} * {{DEFAULT_REDUCE_RESOURCE_PROFILE}} appears unused. {{ProfileCapability}} * Is it important to fail a null override? I should think it would be friendlier to treat it as {{Resource.newInstance(0, 0)}}. * In {{toResource()}} returning the override if the profile map is empty seems a nonintuitive choice. Why not return the default profile? In any case, the javadoc should explain the expected return values for all the special cases. * {{none}} in {{toResource()}} should be a constant. * In {{toResource()}} the consecutive _if_s in the _for_ loop can be combined.
Considering that there could be a large number of resource types, it probably makes more sense to scrap the loop for an _if-memory_ and an _if-cpu_. {{Resource}} * In {{newInstance()}} the _try-catch_ doesn't cover all cases. {{ResourcePBImpl.getResourceInformation()}} throws a {{ResourceNotFoundException}}, which is not a {{YarnException}}. {{TestResourceProfiles}} * In {{testConvertProfileToResourceCapability()}}, the _try_ should start right before the copy. > Add support for resource profiles in MapReduce > -- > > Key: YARN-6504 > URL: https://issues.apache.org/jira/browse/YARN-6504 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-6504-YARN-3926.001.patch > >
[jira] [Commented] (YARN-3742) YARN RM will shut down if ZKClient creation times out
[ https://issues.apache.org/jira/browse/YARN-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003614#comment-16003614 ] Karthik Kambatla commented on YARN-3742: Missed that. Looks good. +1. > YARN RM will shut down if ZKClient creation times out > --- > > Key: YARN-3742 > URL: https://issues.apache.org/jira/browse/YARN-3742 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 2.7.0 >Reporter: Wilfred Spiegelenburg >Assignee: Daniel Templeton > Attachments: YARN-3742.001.patch, YARN-3742.002.patch > > > The RM goes down showing the following stacktrace if the ZK client connection > fails to be created. We should not exit but transition to StandBy and stop > doing things and let the other RM take over. > {code} > 2015-04-19 01:22:20,513 FATAL > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Received a > org.apache.hadoop.yarn.server.resourcemanager.RMFatalEvent of type > STATE_STORE_OP_FAILED. Cause: > java.io.IOException: Wait for ZKClient creation timed out > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1066) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1090) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.existsWithRetries(ZKRMStateStore.java:996) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationStateInternal(ZKRMStateStore.java:643) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:162) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore.java:147) > at > org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362) > at > 
org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:806) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:879) > at > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:874) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:173) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:106) > at java.lang.Thread.run(Thread.java:745) > {code}
[jira] [Updated] (YARN-6484) [Documentation] Documenting the YARN Federation feature
[ https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6484: --- Attachment: YARN-6484-YARN-2915.v1.patch > [Documentation] Documenting the YARN Federation feature > --- > > Key: YARN-6484 > URL: https://issues.apache.org/jira/browse/YARN-6484 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Subru Krishnan >Assignee: Carlo Curino > Attachments: YARN-6484-YARN-2915.v0.patch, > YARN-6484-YARN-2915.v1.patch > > > We should document the high level design and configuration to enable YARN > Federation
[jira] [Commented] (YARN-3545) Investigate the concurrency issue with the map of timeline collector
[ https://issues.apache.org/jira/browse/YARN-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003581#comment-16003581 ] Vrushali C commented on YARN-3545: -- Hmm, I think we thought the patch was stale even then and decided to come back to it later. I think it might be fine to look at it afresh now. > Investigate the concurrency issue with the map of timeline collector > > > Key: YARN-3545 > URL: https://issues.apache.org/jira/browse/YARN-3545 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Zhijie Shen >Assignee: Li Lu > Labels: YARN-5355, oct16-medium > Attachments: YARN-3545-YARN-2928.000.patch > > > See the discussion in YARN-3390 for details. Let's continue the discussion > here.
[jira] [Updated] (YARN-6533) Race condition in writing service record to registry in yarn native services
[ https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Billie Rinaldi updated YARN-6533: - Attachment: YARN-6533-yarn-native-services.002.patch > Race condition in writing service record to registry in yarn native services > > > Key: YARN-6533 > URL: https://issues.apache.org/jira/browse/YARN-6533 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-6533-yarn-native-services.001.patch, > YARN-6533-yarn-native-services.002.patch > > > The ServiceRecord is written twice, once when the container is initially > registered and again in the Docker provider once the IP has been obtained for > the container. These occur asynchronously, so the more important record (the > one with the IP) can be overwritten by the initial record. Only one record > needs to be written, so we can stop writing the initial record when the > Docker provider is being used.
[jira] [Comment Edited] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider
[ https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003557#comment-16003557 ] Jonathan Hung edited comment on YARN-5949 at 5/9/17 9:08 PM: - Discussed this offline, will upload a patch addressing scheduler agnostic configuration mutation. Also created YARN-6575 for supporting global scheduler configuration mutation. was (Author: jhung): Discussed this offline, will upload a patch addressing scheduler agnostic configuration mutation. Also created YARN-6574 for supporting global scheduler configuration mutation. > Add pluggable configuration policy interface as a component of > MutableCSConfigurationProvider > - > > Key: YARN-5949 > URL: https://issues.apache.org/jira/browse/YARN-5949 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Attachments: YARN-5949-YARN-5734.001.patch, > YARN-5949-YARN-5734.002.patch > > > This will allow different policies to customize how/if configuration changes > should be applied (for example, a policy might restrict whether a > configuration change by a certain user is allowed). This will be enforced by > the MutableCSConfigurationProvider.
[jira] [Created] (YARN-6575) Support global configuration mutation in MutableConfProvider
Jonathan Hung created YARN-6575: --- Summary: Support global configuration mutation in MutableConfProvider Key: YARN-6575 URL: https://issues.apache.org/jira/browse/YARN-6575 Project: Hadoop YARN Issue Type: Sub-task Reporter: Jonathan Hung Assignee: Jonathan Hung Right now mutating configs assumes they are only queue configs. Support should be added to mutate global scheduler configs.
[jira] [Resolved] (YARN-6574) Support global configuration mutation in MutableConfProvider
[ https://issues.apache.org/jira/browse/YARN-6574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Hung resolved YARN-6574. - Resolution: Duplicate Dupe of YARN-6575 (seems you cannot create a ticket and retroactively make it a subtask of another ticket...) > Support global configuration mutation in MutableConfProvider > > > Key: YARN-6574 > URL: https://issues.apache.org/jira/browse/YARN-6574 > Project: Hadoop YARN > Issue Type: Improvement >Reporter: Jonathan Hung >Assignee: Jonathan Hung > > Right now mutating configs assumes they are only queue configs. Support > should be added to mutate global scheduler configs.
[jira] [Commented] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider
[ https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003557#comment-16003557 ] Jonathan Hung commented on YARN-5949: - Discussed this offline, will upload a patch addressing scheduler agnostic configuration mutation. Also created YARN-6574 for supporting global scheduler configuration mutation. > Add pluggable configuration policy interface as a component of > MutableCSConfigurationProvider > - > > Key: YARN-5949 > URL: https://issues.apache.org/jira/browse/YARN-5949 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Attachments: YARN-5949-YARN-5734.001.patch, > YARN-5949-YARN-5734.002.patch > > > This will allow different policies to customize how/if configuration changes > should be applied (for example, a policy might restrict whether a > configuration change by a certain user is allowed). This will be enforced by > the MutableCSConfigurationProvider. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6574) Support global configuration mutation in MutableConfProvider
Jonathan Hung created YARN-6574: --- Summary: Support global configuration mutation in MutableConfProvider Key: YARN-6574 URL: https://issues.apache.org/jira/browse/YARN-6574 Project: Hadoop YARN Issue Type: Improvement Reporter: Jonathan Hung Assignee: Jonathan Hung Right now mutating configs assumes they are only queue configs. Support should be added to mutate global scheduler configs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5094) some YARN container events have timestamp of -1 in REST output
[ https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003537#comment-16003537 ] Haibo Chen commented on YARN-5094: -- [~gtCarrera9] Do you mind if I take over? I am looking at YARN-5169, which is the real cause. > some YARN container events have timestamp of -1 in REST output > -- > > Key: YARN-5094 > URL: https://issues.apache.org/jira/browse/YARN-5094 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Affects Versions: YARN-2928 >Reporter: Sangjin Lee >Assignee: Li Lu > Labels: YARN-5355 > Attachments: YARN-5094-YARN-2928.001.patch > > > Some events in the YARN container entities have timestamp of -1. The > RM-generated container events have proper timestamps. It appears that it's > the NM-generated events that have -1: YARN_CONTAINER_CREATED, > YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, > YARN_NM_CONTAINER_LOCALIZATION_STARTED. > In the YARN container page, > {noformat} > { > id: "YARN_CONTAINER_CREATED", > timestamp: -1, > info: { } > }, > { > id: "YARN_CONTAINER_FINISHED", > timestamp: -1, > info: { > YARN_CONTAINER_EXIT_STATUS: 0, > YARN_CONTAINER_STATE: "RUNNING", > YARN_CONTAINER_DIAGNOSTICS_INFO: "" > } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED", > timestamp: -1, > info: { } > }, > { > id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED", > timestamp: -1, > info: { } > } > {noformat} > I think the data itself is OK, but the values are not being populated in the > REST output? -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
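The defect pattern above (a timestamp field that is never populated and so serializes as -1) can be shown with a minimal sketch. This is hypothetical code, not the actual TimelineEvent class; it only illustrates a defensive fallback at serialization time.

```java
// Sketch of the -1 timestamp symptom: an unset field leaks its sentinel
// value into REST output unless the reader/serializer substitutes a real
// wall-clock time. Class and method names are illustrative.
public class TimelineEventSketch {
  public static final long UNSET = -1L;

  private long timestamp = UNSET;

  public void setTimestamp(long ts) {
    this.timestamp = ts;
  }

  // Defensive accessor for REST serialization: if the event writer (here,
  // standing in for the NM-side publisher) forgot to set the field,
  // fall back to a supplied time instead of emitting -1.
  public long getTimestampOr(long fallback) {
    return timestamp == UNSET ? fallback : timestamp;
  }
}
```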
[jira] [Commented] (YARN-3872) TimelineReader Web UI Implementation
[ https://issues.apache.org/jira/browse/YARN-3872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003512#comment-16003512 ] Haibo Chen commented on YARN-3872: -- [~varun_impala_149e] Is the reader UI based on YARN web ui 2.0, or just the old-school java servlet? > TimelineReader Web UI Implementation > > > Key: YARN-3872 > URL: https://issues.apache.org/jira/browse/YARN-3872 > Project: Hadoop YARN > Issue Type: Sub-task >Affects Versions: YARN-2928 >Reporter: Varun Saxena >Assignee: Varun Saxena > Labels: YARN-5355 > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6533) Race condition in writing service record to registry in yarn native services
[ https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003496#comment-16003496 ] Jian He commented on YARN-6533: --- Actually, there's an issue. Previously, componentInstanceStarted was called in registerComponent, which put some initial ATS data, and then componentInstanceUpdated was called. Now only componentInstanceUpdated is called, so some ATS data will be missing. We need to merge componentInstanceStarted into componentInstanceUpdated. > Race condition in writing service record to registry in yarn native services > > > Key: YARN-6533 > URL: https://issues.apache.org/jira/browse/YARN-6533 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-6533-yarn-native-services.001.patch > > > The ServiceRecord is written twice, once when the container is initially > registered and again in the Docker provider once the IP has been obtained for > the container. These occur asynchronously, so the more important record (the > one with the IP) can be overwritten by the initial record. Only one record > needs to be written, so we can stop writing the initial record when the > Docker provider is being used. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
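The last-write-wins race described in the issue, and the fix of skipping the initial write when a later, richer record is expected, can be sketched as follows. Names are illustrative, not the actual registry or provider API.

```java
// Sketch of the YARN-6533 race: two asynchronous writers target the same
// record, so the initial (IP-less) write can clobber the later one that
// carries the IP. The fix shown: suppress the initial write whenever the
// provider (e.g. the Docker provider) will supply a fuller record later.
import java.util.concurrent.atomic.AtomicReference;

public class RegistrySketch {
  private final AtomicReference<String> record = new AtomicReference<>();
  private final boolean providerSuppliesIp; // true for the Docker-provider case

  public RegistrySketch(boolean providerSuppliesIp) {
    this.providerSuppliesIp = providerSuppliesIp;
  }

  // Initial registration: skipped when a later record with the IP is
  // expected, so it can never overwrite the richer write regardless of
  // the order in which the two callbacks fire.
  public void writeInitialRecord(String rec) {
    if (!providerSuppliesIp) {
      record.set(rec);
    }
  }

  public void writeRecordWithIp(String rec) {
    record.set(rec);
  }

  public String read() {
    return record.get();
  }
}
```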
[jira] [Commented] (YARN-6484) [Documentation] Documenting the YARN Federation feature
[ https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003480#comment-16003480 ] Hadoop QA commented on YARN-6484: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 57s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 50 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch 18 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6484 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867172/YARN-6484-YARN-2915.v0.patch | | Optional Tests | asflicense mvnsite xml | | uname | Linux 9df4d9465583 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | YARN-2915 / 0a93a32 | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/15879/artifact/patchprocess/whitespace-eol.txt | | whitespace | https://builds.apache.org/job/PreCommit-YARN-Build/15879/artifact/patchprocess/whitespace-tabs.txt | | modules | C: hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: . | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15879/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [Documentation] Documenting the YARN Federation feature > --- > > Key: YARN-6484 > URL: https://issues.apache.org/jira/browse/YARN-6484 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Subru Krishnan >Assignee: Carlo Curino > Attachments: YARN-6484-YARN-2915.v0.patch > > > We should document the high level design and configuration to enable YARN > Federation -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6559) Findbugs warning in YARN-5355 branch
[ https://issues.apache.org/jira/browse/YARN-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003476#comment-16003476 ] Vrushali C commented on YARN-6559: -- Huh, interesting. Naga does mention that he has committed to YARN-5355 but why don't we see it? Am I missing something. > Findbugs warning in YARN-5355 branch > > > Key: YARN-6559 > URL: https://issues.apache.org/jira/browse/YARN-6559 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C >Priority: Minor > Attachments: FindBugs Report.html, YARN-6559-YARN-5355.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-3545) Investigate the concurrency issue with the map of timeline collector
[ https://issues.apache.org/jira/browse/YARN-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003479#comment-16003479 ] Haibo Chen commented on YARN-3545: -- Looking at this as it seems related to YARN-6563. [~vrushalic], still remember the reason why Li and you decided to not proceed? > Investigate the concurrency issue with the map of timeline collector > > > Key: YARN-3545 > URL: https://issues.apache.org/jira/browse/YARN-3545 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Zhijie Shen >Assignee: Li Lu > Labels: YARN-5355, oct16-medium > Attachments: YARN-3545-YARN-2928.000.patch > > > See the discussion in YARN-3390 for details. Let's continue the discussion > here. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6533) Race condition in writing service record to registry in yarn native services
[ https://issues.apache.org/jira/browse/YARN-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003474#comment-16003474 ] Jian He commented on YARN-6533: --- +1, committing > Race condition in writing service record to registry in yarn native services > > > Key: YARN-6533 > URL: https://issues.apache.org/jira/browse/YARN-6533 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Billie Rinaldi >Assignee: Billie Rinaldi > Attachments: YARN-6533-yarn-native-services.001.patch > > > The ServiceRecord is written twice, once when the container is initially > registered and again in the Docker provider once the IP has been obtained for > the container. These occur asynchronously, so the more important record (the > one with the IP) can be overwritten by the initial record. Only one record > needs to be written, so we can stop writing the initial record when the > Docker provider is being used. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6545) Followup fix for YARN-6405
[ https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003462#comment-16003462 ] Billie Rinaldi commented on YARN-6545: -- +1 for patch 07. > Followup fix for YARN-6405 > -- > > Key: YARN-6545 > URL: https://issues.apache.org/jira/browse/YARN-6545 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-6545.yarn-native-services.01.patch, > YARN-6545.yarn-native-services.02.patch, > YARN-6545.yarn-native-services.03.patch, > YARN-6545.yarn-native-services.04.patch, > YARN-6545.yarn-native-services.05.patch, > YARN-6545.yarn-native-services.06.patch, > YARN-6545.yarn-native-services.07.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-5301) NM mount cpu cgroups failed on some systems
[ https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003452#comment-16003452 ] Hudson commented on YARN-5301: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11707 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11707/]) YARN-5301. NM mount cpu cgroups failed on some systems (Contributed by (templedf: rev a2f680493f040704e2b85108e286731ee3860a52) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestResourceHandlerModule.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java > NM mount cpu cgroups failed on some systems > --- > > Key: YARN-5301 > URL: https://issues.apache.org/jira/browse/YARN-5301 > Project: Hadoop YARN > Issue Type: Bug >Reporter: sandflee >Assignee: Miklos Szegedi 
> Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-5301.000.patch, YARN-5301.001.patch, > YARN-5301.002.patch, YARN-5301.003.patch, YARN-5301.004.patch, > YARN-5301.005.patch, YARN-5301.006.patch, YARN-5301.007.patch, > YARN-5301.008.patch, YARN-5301.009.patch, YARN-5301.010.patch > > > On Ubuntu with Linux kernel 3.19, NM start failed if auto mount > cgroup is enabled. Try commands: > ./bin/container-executor --mount-cgroups yarn-hadoop cpu=/cgroup/cpu (fails) > ./bin/container-executor --mount-cgroups yarn-hadoop cpu,cpuacct=/cgroup/cpu > (succeeds) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
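The reason the second command works is that on some kernels `cpu` and `cpuacct` are fused into one co-mounted controller set, so a mount request must name the full set. A hedged sketch of the check (illustrative only; not the actual CGroupsHandlerImpl or container-executor logic):

```java
// Sketch: given the comma-separated controller list of an existing cgroup
// mount (as it appears in /proc/mounts options), test whether it covers a
// wanted subsystem. Mirrors why "cpu,cpuacct=..." succeeds where plain
// "cpu=..." fails on kernels that co-mount the two controllers.
public class CgroupControllerSketch {
  public static boolean covers(String mountedControllers, String wanted) {
    for (String c : mountedControllers.split(",")) {
      if (c.equals(wanted)) {
        return true;
      }
    }
    return false;
  }
}
```

A mounter built on this would scan existing mounts for a controller set covering `cpu` and reuse (or request) the whole set rather than the single name.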
[jira] [Assigned] (YARN-6484) [Documentation] Documenting the YARN Federation feature
[ https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino reassigned YARN-6484: -- Assignee: Carlo Curino Affects Version/s: YARN-2915 > [Documentation] Documenting the YARN Federation feature > --- > > Key: YARN-6484 > URL: https://issues.apache.org/jira/browse/YARN-6484 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Affects Versions: YARN-2915 >Reporter: Subru Krishnan >Assignee: Carlo Curino > Attachments: YARN-6484-YARN-2915.v0.patch > > > We should document the high level design and configuration to enable YARN > Federation -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6484) [Documentation] Documenting the YARN Federation feature
[ https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003428#comment-16003428 ] Carlo Curino commented on YARN-6484: Initial version of documentation produced from old docs and current conf. [~botong] Can you please take a look? Make sure the docs are aligned with the latest set of changes. > [Documentation] Documenting the YARN Federation feature > --- > > Key: YARN-6484 > URL: https://issues.apache.org/jira/browse/YARN-6484 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan > Attachments: YARN-6484-YARN-2915.v0.patch > > > We should document the high level design and configuration to enable YARN > Federation -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6484) [Documentation] Documenting the YARN Federation feature
[ https://issues.apache.org/jira/browse/YARN-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6484: --- Attachment: YARN-6484-YARN-2915.v0.patch > [Documentation] Documenting the YARN Federation feature > --- > > Key: YARN-6484 > URL: https://issues.apache.org/jira/browse/YARN-6484 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan > Attachments: YARN-6484-YARN-2915.v0.patch > > > We should document the high level design and configuration to enable YARN > Federation -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6561) Update exception information during timeline collector aux service initialization
[ https://issues.apache.org/jira/browse/YARN-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003418#comment-16003418 ] Haibo Chen commented on YARN-6561: -- +1 on the patch. Will wait until the end of today to commit > Update exception information during timeline collector aux service > initialization > - > > Key: YARN-6561 > URL: https://issues.apache.org/jira/browse/YARN-6561 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Vrushali C >Assignee: Vrushali C >Priority: Minor > Attachments: YARN-6561.001.patch > > > If the NM is started with timeline service v2 turned off AND aux services > setting still containing "timeline_collector", NM will fail to start up since > the PerNodeTimelineCollectorsAuxService#serviceInit is invoked and it throws > an exception. The exception message is a bit misleading and does not indicate > where the actual misconfiguration is. > We should update the exception message so that the right error is conveyed > and helps the cluster admin/ops to correct the relevant yarn config settings. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
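The improvement the ticket asks for (an init-failure message that names the conflicting settings) can be sketched like this. The class and exact message wording are hypothetical; `yarn.nodemanager.aux-services` is a real YARN property, and the timeline-service version/enable keys referenced in the message text are assumptions about which settings the admin must reconcile.

```java
// Sketch: build an aux-service init error that points the admin at the
// actual misconfiguration (timeline_collector configured while timeline
// service v2 is off) instead of a generic failure. Illustrative only.
public class AuxServiceInitSketch {
  public static String misconfigMessage(boolean timelineV2Enabled,
                                        boolean collectorAuxServiceConfigured) {
    if (!timelineV2Enabled && collectorAuxServiceConfigured) {
      return "timeline_collector is listed under yarn.nodemanager.aux-services "
          + "but timeline service v2 is disabled; either enable timeline "
          + "service v2 or remove timeline_collector from the aux-services list";
    }
    return null; // no misconfiguration detected
  }
}
```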
[jira] [Commented] (YARN-6559) Findbugs warning in YARN-5355 branch
[ https://issues.apache.org/jira/browse/YARN-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003410#comment-16003410 ] Haibo Chen commented on YARN-6559: -- Wait, this looks like YARN-6518? Should we cherry-pick that instead? > Findbugs warning in YARN-5355 branch > > > Key: YARN-6559 > URL: https://issues.apache.org/jira/browse/YARN-6559 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C >Priority: Minor > Attachments: FindBugs Report.html, YARN-6559-YARN-5355.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6559) Findbugs warning in YARN-5355 branch
[ https://issues.apache.org/jira/browse/YARN-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003402#comment-16003402 ] Haibo Chen commented on YARN-6559: -- Thanks for pointing that out. +1 > Findbugs warning in YARN-5355 branch > > > Key: YARN-6559 > URL: https://issues.apache.org/jira/browse/YARN-6559 > Project: Hadoop YARN > Issue Type: Sub-task > Components: timelineserver >Reporter: Varun Saxena >Assignee: Vrushali C >Priority: Minor > Attachments: FindBugs Report.html, YARN-6559-YARN-5355.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-3742) YARN RM will shut down if ZKClient creation times out
[ https://issues.apache.org/jira/browse/YARN-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003374#comment-16003374 ] Hadoop QA commented on YARN-3742: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 4s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 132 unchanged - 4 fixed = 133 total (was 136) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 34s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 50s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}115m 59s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-3742 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12863141/YARN-3742.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 3462e2ad67dc 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 45382bf | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15874/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15874/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results |
[jira] [Commented] (YARN-5531) UnmanagedAM pool manager for federating application across clusters
[ https://issues.apache.org/jira/browse/YARN-5531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003365#comment-16003365 ] Hadoop QA commented on YARN-5531: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 3s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 17s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 31s{color} | {color:green} YARN-2915 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 30s{color} | {color:green} YARN-2915 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 10s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in YARN-2915 has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 50s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in YARN-2915 has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} YARN-2915 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 36s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 53s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 48 unchanged - 1 fixed = 49 total (was 49) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 19s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 2s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 4s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}128m 53s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-5531 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867138/YARN-5531-YARN-2915.v10.patch | | Optional Tests | asflicense compile javac javadoc
[jira] [Updated] (YARN-6504) Add support for resource profiles in MapReduce
[ https://issues.apache.org/jira/browse/YARN-6504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev updated YARN-6504: Attachment: YARN-6504-YARN-3926.001.patch > Add support for resource profiles in MapReduce > -- > > Key: YARN-6504 > URL: https://issues.apache.org/jira/browse/YARN-6504 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: YARN-6504-YARN-3926.001.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-4476) Matcher for complex node label expressions
[ https://issues.apache.org/jira/browse/YARN-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003293#comment-16003293 ] Hadoop QA commented on YARN-4476: - | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 22s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 28 new + 0 unchanged - 0 fixed = 28 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 49s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-4476 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12867140/YARN-4476.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux bf12a20e8faa 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 45382bf | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15877/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15877/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15877/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Matcher for complex node label expressions > - > > Key: YARN-4476 > URL:
[jira] [Commented] (YARN-6563) ConcurrentModificationException in TimelineCollectorManager while stopping RM
[ https://issues.apache.org/jira/browse/YARN-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003278#comment-16003278 ] Hudson commented on YARN-6563: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11706 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11706/]) YARN-6563 ConcurrentModificationException in TimelineCollectorManager (vrushali: rev 7dd258d8f4aef594346e874e5ad4ba22c3171cd1) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java > ConcurrentModificationException in TimelineCollectorManager while stopping RM > - > > Key: YARN-6563 > URL: https://issues.apache.org/jira/browse/YARN-6563 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Rohith Sharma K S >Assignee: Haibo Chen > Fix For: YARN-5355, YARN-5355-branch-2, 3.0.0-alpha3 > > Attachments: YARN-6563.00.patch > > > A ConcurrentModificationException is seen while stopping the RM when ATSv2 is > enabled. 
> {noformat}
> 2017-05-05 15:04:11,563 WARN org.apache.hadoop.service.CompositeService: When stopping the service org.apache.hadoop.yarn.server.resourcemanager.timelineservice.RMTimelineCollectorManager : java.util.ConcurrentModificationException
> java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
>   at java.util.HashMap$ValueIterator.next(HashMap.java:1466)
>   at org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorManager.serviceStop(TimelineCollectorManager.java:222)
>   at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
>   at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
>   at org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
>   at org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
>   at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1285)
> {noformat}
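For context, the stack trace points at iteration over a plain HashMap's values while entries are being removed, which is exactly when HashMap's fail-fast iterator throws. The following is a minimal, self-contained sketch of that failure mode and one common fix (iterator-mediated removal); the map contents and class name here are illustrative, not the actual TimelineCollectorManager state:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.ConcurrentModificationException;

public class CmeDemo {

  // Reproduces the failure mode: structurally modifying a plain HashMap
  // while iterating over its values makes the next Iterator.next() call
  // throw ConcurrentModificationException.
  static boolean removalDuringIterationThrows() {
    Map<String, String> collectors = new HashMap<>();
    collectors.put("app_1", "collector-1");
    collectors.put("app_2", "collector-2");
    try {
      for (String ignored : collectors.values()) {
        collectors.remove("app_1"); // structural modification mid-iteration
      }
      return false;
    } catch (ConcurrentModificationException e) {
      return true;
    }
  }

  // One safe shutdown pattern: drain the map through the iterator itself,
  // so removal and iteration stay consistent. Returns the final size.
  static int drainWithIterator(Map<String, String> map) {
    Iterator<String> it = map.values().iterator();
    while (it.hasNext()) {
      it.next();
      it.remove(); // iterator-mediated removal does not trip the fail-fast check
    }
    return map.size();
  }

  public static void main(String[] args) {
    System.out.println("throws during iteration: " + removalDuringIterationThrows());
    Map<String, String> m = new HashMap<>();
    m.put("app_1", "collector-1");
    System.out.println("remaining after drain: " + drainWithIterator(m));
  }
}
```

Switching the map to a ConcurrentHashMap is the other common remedy, since its iterators are weakly consistent and never throw ConcurrentModificationException.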
[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue
[ https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003254#comment-16003254 ] Eric Payne commented on YARN-2113: -- [~sunilg] / [~leftnoteasy]. It looks like the latest patch has all of the features we agreed on. Thanks [~sunilg] for all of your hard work on this feature and [~leftnoteasy] for all of your reviews and valuable feedback. > Add cross-user preemption within CapacityScheduler's leaf-queue > --- > > Key: YARN-2113 > URL: https://issues.apache.org/jira/browse/YARN-2113 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler >Reporter: Vinod Kumar Vavilapalli >Assignee: Sunil G > Attachments: IntraQueue Preemption-Impact Analysis.pdf, > TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt, > YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, > YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, > YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, > YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, > YARN-2113.0013.patch, YARN-2113.0014.patch, > YARN-2113.apply.onto.0012.ericp.patch, YARN-2113 Intra-QueuePreemption > Behavior.pdf, YARN-2113.v0.patch > > > Preemption today only works across queues and moves around resources across > queues per demand and usage. We should also have user-level preemption within > a queue, to balance capacity across users in a predictable manner.
[jira] [Updated] (YARN-5413) Create a proxy chain for ResourceManager Admin API in the Router
[ https://issues.apache.org/jira/browse/YARN-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-5413: --- Attachment: (was: YARN-5413-YARN-2915.v3.patch) > Create a proxy chain for ResourceManager Admin API in the Router > > > Key: YARN-5413 > URL: https://issues.apache.org/jira/browse/YARN-5413 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-5413-YARN-2915.v1.patch, > YARN-5413-YARN-2915.v2.patch > > > As detailed in the proposal in the umbrella JIRA, we are introducing a new > component that routes client requests to the appropriate ResourceManager(s). This > JIRA tracks the creation of a proxy for the ResourceManager Admin API in the > Router. This provides a placeholder for: > 1) throttling mis-behaving clients (YARN-1546) > 3) masking the access to multiple RMs (YARN-3659) > We are planning to follow the interceptor pattern like we did in YARN-2884 to > generalize the approach and have only dynamic coupling for Federation.
[jira] [Updated] (YARN-5413) Create a proxy chain for ResourceManager Admin API in the Router
[ https://issues.apache.org/jira/browse/YARN-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-5413: --- Attachment: YARN-5413-YARN-2915.v3.patch > Create a proxy chain for ResourceManager Admin API in the Router > > > Key: YARN-5413 > URL: https://issues.apache.org/jira/browse/YARN-5413 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-5413-YARN-2915.v1.patch, > YARN-5413-YARN-2915.v2.patch, YARN-5413-YARN-2915.v3.patch > > > As detailed in the proposal in the umbrella JIRA, we are introducing a new > component that routes client requests to the appropriate ResourceManager(s). This > JIRA tracks the creation of a proxy for the ResourceManager Admin API in the > Router. This provides a placeholder for: > 1) throttling mis-behaving clients (YARN-1546) > 3) masking the access to multiple RMs (YARN-3659) > We are planning to follow the interceptor pattern like we did in YARN-2884 to > generalize the approach and have only dynamic coupling for Federation.
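The interceptor pattern the description refers to (as in YARN-2884) chains request handlers so stages such as throttling and RM access-masking can be composed, with the concrete chain wired up dynamically. Below is a minimal sketch of that shape; the names (`AdminRequestInterceptor`, `TaggingInterceptor`, `TerminalInterceptor`) and the string-based request are illustrative stand-ins, not the actual YARN-5413 classes:

```java
public class InterceptorChainDemo {

  // Each stage may process a request and then forwards it down the chain.
  interface AdminRequestInterceptor {
    void setNextInterceptor(AdminRequestInterceptor next);
    String handle(String request);
  }

  // A pass-through stage that tags the request before forwarding it
  // (a stand-in for throttling or access-masking logic).
  static class TaggingInterceptor implements AdminRequestInterceptor {
    private AdminRequestInterceptor next;
    private final String tag;
    TaggingInterceptor(String tag) { this.tag = tag; }
    @Override public void setNextInterceptor(AdminRequestInterceptor next) { this.next = next; }
    @Override public String handle(String request) {
      return next.handle(tag + ":" + request);
    }
  }

  // The terminal stage, which in the Router would actually invoke the
  // ResourceManager admin API rather than return a string.
  static class TerminalInterceptor implements AdminRequestInterceptor {
    @Override public void setNextInterceptor(AdminRequestInterceptor next) {
      throw new UnsupportedOperationException("terminal stage has no successor");
    }
    @Override public String handle(String request) {
      return "handled(" + request + ")";
    }
  }

  // Assembles throttle -> mask -> terminal; a real Router would build this
  // chain from configuration instead of hard-coding it.
  static String process(String request) {
    AdminRequestInterceptor throttling = new TaggingInterceptor("throttle");
    AdminRequestInterceptor masking = new TaggingInterceptor("mask");
    throttling.setNextInterceptor(masking);
    masking.setNextInterceptor(new TerminalInterceptor());
    return throttling.handle(request);
  }

  public static void main(String[] args) {
    // Each stage prepends its tag, then the terminal stage answers.
    System.out.println(process("refreshQueues"));
  }
}
```

The appeal of the pattern here is that each concern (throttling, masking, the eventual RM call) stays in its own class, and inserting or removing a stage only changes how the chain is assembled.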
[jira] [Commented] (YARN-6545) Followup fix for YARN-6405
[ https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16003228#comment-16003228 ] Hadoop QA commented on YARN-6545: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 7s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 23s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 56s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 3s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 3 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 55s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager in yarn-native-services has 5 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} yarn-native-services passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 9m 12s{color} | {color:red} hadoop-yarn-project_hadoop-yarn generated 6 new + 65 unchanged - 6 fixed = 71 total (was 71) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch generated 11 new + 675 unchanged - 9 fixed = 686 total (was 684) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 12s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 26s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 59s{color} | {color:green} hadoop-yarn-slider-core in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 79m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6545 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866986/YARN-6545.yarn-native-services.07.patch | | Optional Tests |