[jira] [Commented] (YARN-6481) Yarn top shows negative container number in FS
[ https://issues.apache.org/jira/browse/YARN-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992460#comment-15992460 ] Yufei Gu commented on YARN-6481: LGTM. +1 (non-binding). cc committers [~kasha], [~templedf].
> Yarn top shows negative container number in FS
> Key: YARN-6481
> URL: https://issues.apache.org/jira/browse/YARN-6481
> Project: Hadoop YARN
> Issue Type: Bug
> Components: yarn
> Affects Versions: 2.9.0
> Reporter: Yufei Gu
> Assignee: Tao Jie
> Labels: newbie
> Attachments: YARN-6481.001.patch, YARN-6481.002.patch
>
> yarn top shows negative container numbers, and they didn't change even when they were supposed to.
> {code}
> NodeManager(s): 2 total, 2 active, 0 unhealthy, 0 decommissioned, 0 lost, 0 rebooted
> Queue(s) Applications: 0 running, 12 submitted, 0 pending, 12 completed, 0 killed, 0 failed
> Queue(s) Mem(GB): 0 available, 0 allocated, 0 pending, 0 reserved
> Queue(s) VCores: 0 available, 0 allocated, 0 pending, 0 reserved
> Queue(s) Containers: -2 allocated, -2 pending, -2 reserved
> APPLICATIONID USER TYPE QUEUE #CONT #RCONT VCORES RVC
> {code}
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (YARN-6514) Fail to launch container when distributed scheduling is enabled
[ https://issues.apache.org/jira/browse/YARN-6514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Lv updated YARN-6514: Comment: was deleted (was: [~asuresh], I set this option on NM nodes, and keep {{yarn-site.xml}} on the RM machine unchanged. The same error occurred. The last INFO before the AMRMToken error is {{org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at h3master/192.168.0.165:8030}}. So I think AM did connect to the Proxy instead of directly to the RM.)
> Fail to launch container when distributed scheduling is enabled
> Key: YARN-6514
> URL: https://issues.apache.org/jira/browse/YARN-6514
> Project: Hadoop YARN
> Issue Type: Bug
> Components: distributed-scheduling, yarn
> Affects Versions: 3.0.0-alpha2
> Environment: Ubuntu Linux 4.4.0-72-generic with java-8-openjdk-amd64 1.8.0_121
> Reporter: Zheng Lv
>
> When yarn.nodemanager.distributed-scheduling.enabled is set to true, mapreduce fails to launch with Invalid AMRMToken errors.
> This error does not occur when the distributed scheduling option is disabled.
> {code:title=yarn-site.xml|borderStyle=solid}
> <configuration>
>   <property>
>     <name>yarn.resourcemanager.hostname</name>
>     <value>h3master</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.env-whitelist</name>
>     <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.vmem-check-enabled</name>
>     <value>false</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.opportunistic-container-allocation.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.opportunistic-containers-max-queue-length</name>
>     <value>10</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.distributed-scheduling.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.amrmproxy.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.opportunistic-container-allocation.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>4096</value>
>   </property>
> </configuration>
> {code}
> {code:title=Container Log|borderStyle=solid}
> 2017-04-23 05:17:50,324 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1492953411349_0001_02
> 2017-04-23 05:17:51,625 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: /
> [system properties]
> os.name: Linux
> os.version: 4.4.0-72-generic
> java.home: /usr/lib/jvm/java-8-openjdk-amd64/jre
> java.runtime.version: 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13
> java.vendor: Oracle Corporation
> java.version: 1.8.0_121
> java.vm.name: OpenJDK 64-Bit Server VM
> java.class.path:
/tmp/hadoop-administrator/nm-local-dir/usercache/administrator/appcache/application_1492953411349_0001/container_1492953411349_0001_02_01:/home/administrator/hadoop-3.0.0-alpha2/etc/hadoop:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/hadoop-common-3.0.0-alpha2.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/hadoop-common-3.0.0-alpha2-tests.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/hadoop-nfs-3.0.0-alpha2.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/hadoop-kms-3.0.0-alpha2.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/kerb-server-1.0.0-RC2.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/jetty-servlet-9.3.11.v20160721.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/jetty-xml-9.3.11.v20160721.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/jsp-api-2.1.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/jline-0.9.94.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/kerb-common-1.0.0-RC2.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/xmlenc-0.52.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/kerb-simplekdc-1.0.0-RC2.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/jetty-webapp-9.3.11.v20160721.jar:/home/administrator/hadoop-3.0.0-alpha2/share/hadoop/common/lib/jetty-http-9.3.11.v20160721.jar:/home/admin
[jira] [Commented] (YARN-6521) Yarn Log-aggregation Transform Enable not to Spam the NameNode
[ https://issues.apache.org/jira/browse/YARN-6521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992496#comment-15992496 ] zhangyubiao commented on YARN-6521: I find that YARN-4904 also solves this problem.
> Yarn Log-aggregation Transform Enable not to Spam the NameNode
> Key: YARN-6521
> URL: https://issues.apache.org/jira/browse/YARN-6521
> Project: Hadoop YARN
> Issue Type: Improvement
> Reporter: zhangyubiao
>
> Nowadays we have a large cluster; as the number of applications and containers grows, we split off a separate namespace to store app logs.
> But the growth of the log-aggregation directory /tmp/app-logs also makes that namespace respond slowly.
> We want to change yarn.log-aggregation-enable from true to false, but transform the service so that the yarn logs CLI can still fetch the app logs.
[jira] [Created] (YARN-6543) yarn application's privilege is determined by yarn process creator instead of yarn application user.
wuchang created YARN-6543:
Summary: yarn application's privilege is determined by yarn process creator instead of yarn application user.
Key: YARN-6543
URL: https://issues.apache.org/jira/browse/YARN-6543
Project: Hadoop YARN
Issue Type: Bug
Reporter: wuchang

My application is a pyspark application which is impersonated by user 'wuchang'. My application information is:
{code}
Application Report :
	Application-Id : application_1493004858240_0007
	Application-Name : livy-session-6
	Application-Type : SPARK
	User : wuchang
	Queue : root.wuchang
	Start-Time : 1493708942748
	Finish-Time : 0
	Progress : 10%
	State : RUNNING
	Final-State : UNDEFINED
	Tracking-URL : http://10.120.241.82:34462
	RPC Port : 0
	AM Host : 10.120.241.82
	Aggregate Resource Allocation : 4369480 MB-seconds, 2131 vcore-seconds
	Diagnostics :
{code}
And the process is :
{code}
appuser 25454 25872 0 15:09 ? 00:00:00 bash /data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/default_container_executor.sh
appuser 25456 25454 0 15:09 ? 00:00:00 /bin/bash -c /home/jdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=40969' -Dspark.yarn.app.container.log.dir=/home/log/hadoop/logs/userlogs/application_1493004858240_0007/container_1493004858240_0007_01_04 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.120.241.82:40969 --executor-id 2 --hostname 10.120.241.18 --cores 1 --app-id application_1493004858240_0007 --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/__app__.jar --user-class-path
file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/livy-api-0.3.0-SNAPSHOT.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/livy-rsc-0.3.0-SNAPSHOT.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/netty-all-4.0.29.Final.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/commons-codec-1.9.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/livy-core_2.11-0.3.0-SNAPSHOT.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/livy-repl_2.11-0.3.0-SNAPSHOT.jar 1> /home/log/hadoop/logs/userlogs/application_1493004858240_0007/container_1493004858240_0007_01_04/stdout 2> /home/log/hadoop/logs/userlogs/application_1493004858240_0007/container_1493004858240_0007_01_04/stderr appuser 25468 25456 2 15:09 ?00:00:09 /home/jdk/bin/java -server -Xmx1024m -Djava.io.tmpdir=/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/tmp -Dspark.ui.port=0 -Dspark.driver.port=40969 -Dspark.yarn.app.container.log.dir=/home/log/hadoop/logs/userlogs/application_1493004858240_0007/container_1493004858240_0007_01_04 -XX:OnOutOfMemoryError=kill %p org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@10.120.241.82:40969 --executor-id 2 --hostname 10.120.241.18 --cores 1 --app-id application_1493004858240_0007 --user-class-path 
file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/__app__.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/livy-api-0.3.0-SNAPSHOT.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/livy-rsc-0.3.0-SNAPSHOT.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/container_1493004858240_0007_01_04/netty-all-4.0.29.Final.jar --user-class-path file:/data/data/hadoop/tmp/nm-local-dir/usercache/wuchang/appcache/application_1493004858240_0007/co
[jira] [Commented] (YARN-6419) Support to launch native-service deployment from new YARN UI
[ https://issues.apache.org/jira/browse/YARN-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992535#comment-15992535 ] Sunil G commented on YARN-6419: +1 from my side. I tested the latest patch against the native-service branch. Looks good. [~jianhe], could you also please take a look? Thank you.
> Support to launch native-service deployment from new YARN UI
> Key: YARN-6419
> URL: https://issues.apache.org/jira/browse/YARN-6419
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: yarn-ui-v2
> Reporter: Akhil PB
> Assignee: Akhil PB
> Attachments: Screenshot-deploy-new-service-form-input.png, Screenshot-deploy-new-service-json-input.png, Screenshot-deploy-service-add-component-form-input.png, YARN-6419.001.patch, YARN-6419.002.patch, YARN-6419.003.patch, YARN-6419.004.patch, YARN-6419-yarn-native-services.001.patch, YARN-6419-yarn-native-services.002.patch, YARN-6419-yarn-native-services.003.patch, YARN-6419-yarn-native-services.004.patch
[jira] [Commented] (YARN-6519) Fix warnings from Spotbugs in hadoop-yarn-server-resourcemanager
[ https://issues.apache.org/jira/browse/YARN-6519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992552#comment-15992552 ] Naganarasimha G R commented on YARN-6519: Thanks for the patch, [~cheersyang]. The test case failures are not related to the patch; committing it shortly!
> Fix warnings from Spotbugs in hadoop-yarn-server-resourcemanager
> Key: YARN-6519
> URL: https://issues.apache.org/jira/browse/YARN-6519
> Project: Hadoop YARN
> Issue Type: Bug
> Components: resourcemanager
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Labels: findbugs
> Attachments: YARN-6519.001.patch, YARN-6519.002.patch, YARN-6519-branch-2.001.patch
>
> There are 8 findbugs warnings in hadoop-yarn-server-resourcemanager since we switched to spotbugs:
> # org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager$1.compare(CSQueue, CSQueue) incorrectly handles float value
> # org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType.index field is public and mutable
> # org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.EMPTY_CONTAINER_LIST is a mutable collection which should be package protected
> # org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.EMPTY_CONTAINER_LIST is a mutable collection which should be package protected
> # org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.queueMetrics is a mutable collection
> # org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.cleanupStaledPreemptionCandidates(long) makes inefficient use of keySet iterator instead of entrySet iterator
> # org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.transferStateFromAttempt(RMAppAttempt) makes inefficient use of keySet iterator instead of entrySet iterator
> # org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.cleanupPreemptionList() makes inefficient use of keySet iterator instead of entrySet iterator
> See more from [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html]
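Two of the findings quoted above (the keySet-versus-entrySet iteration and the float-valued comparator) are generic Java patterns rather than anything RM-specific. A minimal standalone sketch of both, using hypothetical names rather than the actual patched code:

```java
import java.util.HashMap;
import java.util.Map;

public class FindbugsPatterns {

    // Flagged pattern: iterating keySet() and calling get(key) performs a
    // second hash lookup for every key.
    static long sumKeySet(Map<String, Long> m) {
        long sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k); // redundant lookup
        }
        return sum;
    }

    // Fix: entrySet() hands back key and value in a single traversal.
    static long sumEntrySet(Map<String, Long> m) {
        long sum = 0;
        for (Map.Entry<String, Long> e : m.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    // Flagged comparator pattern: subtracting floats and casting to int
    // loses sub-1.0 differences, so 0.5f and 0.2f compare as "equal".
    static int badCompare(float a, float b) {
        return (int) (a - b);
    }

    // Fix: Float.compare orders correctly (and handles NaN).
    static int goodCompare(float a, float b) {
        return Float.compare(a, b);
    }

    public static void main(String[] args) {
        Map<String, Long> m = new HashMap<>();
        m.put("allocated", 2L);
        m.put("pending", 3L);
        System.out.println(sumKeySet(m) == sumEntrySet(m)); // true
        System.out.println(badCompare(0.5f, 0.2f));         // 0 -- wrong
        System.out.println(goodCompare(0.5f, 0.2f) > 0);    // true
    }
}
```

Both fixes change performance or correctness without changing the method's contract, which is why they are safe, mechanical cleanups.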
[jira] [Updated] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only
[ https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manikandan R updated YARN-6467: Attachment: YARN-6467.002.patch
> CSQueueMetrics needs to update the current metrics for default partition only
> Key: YARN-6467
> URL: https://issues.apache.org/jira/browse/YARN-6467
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler
> Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha2
> Reporter: Naganarasimha G R
> Assignee: Manikandan R
> Attachments: YARN-6467.001.patch, YARN-6467.001.patch, YARN-6467.002.patch
>
> As a followup to YARN-6195, we need to update the existing metrics for the default partition only.
[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only
[ https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992657#comment-15992657 ] Manikandan R commented on YARN-6467: Fixed the checkstyle, whitespace, and javadoc comments. The JUnit failures are not related to this patch. Attaching a new patch with the changes. (On a separate note, my earlier patch's version number was not correct; it should have been .002, not .001. However, Jenkins picked up the most recent patch and ran the tests.)
[jira] [Commented] (YARN-6543) yarn application's privilege is determined by yarn process creator instead of yarn application user.
[ https://issues.apache.org/jira/browse/YARN-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992668#comment-15992668 ] Rohith Sharma K S commented on YARN-6543: This is the default behavior of YARN, which uses the DefaultContainerExecutor by default. To achieve your use case, you can use the LinuxContainerExecutor. The details of configuring LCE are given in the docs; refer to [LCE|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html#LinuxContainerExecutor].
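Following up the pointer to LinuxContainerExecutor above: the switch is, at minimum, a yarn-site.xml change on each NodeManager. A minimal sketch only; the group value {{hadoop}} is an assumption for your cluster, and LCE additionally requires the setuid {{container-executor}} binary and its container-executor.cfg to be set up as described in the SecureMode doc linked above.

{code:title=yarn-site.xml (sketch)|borderStyle=solid}
<property>
  <!-- Run containers as the submitting user instead of the NM process user -->
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <!-- Unix group of the NodeManager; "hadoop" is an assumed example -->
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
{code}

With this in place, the `ps` output above would show the container processes owned by the submitting user (here 'wuchang') rather than by the NM's 'appuser'.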
[jira] [Updated] (YARN-6398) Implement a new native-service UI
[ https://issues.apache.org/jira/browse/YARN-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akhil PB updated YARN-6398: Attachment: YARN-6398-yarn-native-services.001.patch
YARN-6398 v1 patch rebased on the yarn-native-services branch.
> Implement a new native-service UI
> Key: YARN-6398
> URL: https://issues.apache.org/jira/browse/YARN-6398
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: yarn-native-services
> Reporter: Sunil G
> Assignee: Akhil PB
> Attachments: YARN-6398.001.patch, YARN-6398.002.patch, YARN-6398.003.patch, YARN-6398-yarn-native-services.001.patch
>
> Create a new and advanced native-service UI that can co-exist with the new YARN UI.
[jira] [Commented] (YARN-6467) CSQueueMetrics needs to update the current metrics for default partition only
[ https://issues.apache.org/jira/browse/YARN-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992710#comment-15992710 ] Hadoop QA commented on YARN-6467: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 585 unchanged - 6 fixed = 587 total (was 591) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 48s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 48s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6467 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12865933/YARN-6467.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b918fe1c2c21 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b0f54ea | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15795/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15795/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15795/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://builds.apache.org/job/PreCommit-YAR
[jira] [Commented] (YARN-6481) Yarn top shows negative container number in FS
[ https://issues.apache.org/jira/browse/YARN-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992956#comment-15992956 ] Daniel Templeton commented on YARN-6481: Looks like this change should have been made in YARN-3961. Thanks for adding it, [~Tao Jie]. +1 I'll commit when I get a chance.
[jira] [Created] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
Karam Singh created YARN-6544:
Summary: Add Null check RegistryDNS service while parsing registry records
Key: YARN-6544
URL: https://issues.apache.org/jira/browse/YARN-6544
Project: Hadoop YARN
Issue Type: Sub-task
Components: yarn-native-services
Affects Versions: YARN-4757
Reporter: Karam Singh
Fix For: YARN-4757

Add a null check to the RegistryDNS service while parsing registry records for the YARN persistence attribute. As of now it assumes that a YARN registry record always contains the YARN persistence attribute, which is not the case.
[jira] [Updated] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
[ https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karam Singh updated YARN-6544: Affects Version/s: (was: YARN-4757) yarn-native-services
[jira] [Updated] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
[ https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karam Singh updated YARN-6544: -- Fix Version/s: (was: YARN-4757) yarn-native-services > Add Null check RegistryDNS service while parsing registry records > - > > Key: YARN-6544 > URL: https://issues.apache.org/jira/browse/YARN-6544 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Affects Versions: yarn-native-services >Reporter: Karam Singh > Fix For: yarn-native-services > > > Add a null check to the RegistryDNS service while parsing registry records for the Yarn > persistence attribute. > As of now, it assumes that a yarn registry record always contains the yarn > persistence attribute, which is not the case. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
[ https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karam Singh updated YARN-6544: -- Attachment: YARN-6544-yarn-native-services.001.patch Initial patch to add a null check in the DNS service while parsing the registry record for the yarn persistence attribute > Add Null check RegistryDNS service while parsing registry records > - > > Key: YARN-6544 > URL: https://issues.apache.org/jira/browse/YARN-6544 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Affects Versions: yarn-native-services >Reporter: Karam Singh > Fix For: yarn-native-services > > Attachments: YARN-6544-yarn-native-services.001.patch > > > Add a null check to the RegistryDNS service while parsing registry records for the Yarn > persistence attribute. > As of now, it assumes that a yarn registry record always contains the yarn > persistence attribute, which is not the case. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6419) Support to launch native-service deployment from new YARN UI
[ https://issues.apache.org/jira/browse/YARN-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993180#comment-15993180 ] Jian He commented on YARN-6419: --- lgtm, thanks > Support to launch native-service deployment from new YARN UI > > > Key: YARN-6419 > URL: https://issues.apache.org/jira/browse/YARN-6419 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-ui-v2 >Reporter: Akhil PB >Assignee: Akhil PB > Attachments: Screenshot-deploy-new-service-form-input.png, > Screenshot-deploy-new-service-json-input.png, > Screenshot-deploy-service-add-component-form-input.png, YARN-6419.001.patch, > YARN-6419.002.patch, YARN-6419.003.patch, YARN-6419.004.patch, > YARN-6419-yarn-native-services.001.patch, > YARN-6419-yarn-native-services.002.patch, > YARN-6419-yarn-native-services.003.patch, > YARN-6419-yarn-native-services.004.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6398) Implement a new native-service UI
[ https://issues.apache.org/jira/browse/YARN-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993195#comment-15993195 ] Hadoop QA commented on YARN-6398: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} YARN-6398 does not apply to yarn-native-services. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | YARN-6398 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12865937/YARN-6398-yarn-native-services.001.patch | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15797/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Implement a new native-service UI > - > > Key: YARN-6398 > URL: https://issues.apache.org/jira/browse/YARN-6398 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Reporter: Sunil G >Assignee: Akhil PB > Attachments: YARN-6398.001.patch, YARN-6398.002.patch, > YARN-6398.003.patch, YARN-6398-yarn-native-services.001.patch > > > Create a new and advanced native service UI which can co-exist with the new > Yarn UI. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6545) Followup fix for YARN-6405
Jian He created YARN-6545: - Summary: Followup fix for YARN-6405 Key: YARN-6545 URL: https://issues.apache.org/jira/browse/YARN-6545 Project: Hadoop YARN Issue Type: Bug Reporter: Jian He Assignee: Jian He -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6545) Followup fix for YARN-6405
[ https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6545: -- Issue Type: Sub-task (was: Bug) Parent: YARN-5079 > Followup fix for YARN-6405 > -- > > Key: YARN-6545 > URL: https://issues.apache.org/jira/browse/YARN-6545 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6545) Followup fix for YARN-6405
[ https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6545: -- Attachment: YARN-6545.yarn-native-services.01.patch > Followup fix for YARN-6405 > -- > > Key: YARN-6545 > URL: https://issues.apache.org/jira/browse/YARN-6545 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-6545.yarn-native-services.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6409) RM does not blacklist node for AM launch failures
[ https://issues.apache.org/jira/browse/YARN-6409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993457#comment-15993457 ] Haibo Chen commented on YARN-6409: -- Yeah, NMs can be unresponsive during the heartbeat window, from what we can tell. > RM does not blacklist node for AM launch failures > - > > Key: YARN-6409 > URL: https://issues.apache.org/jira/browse/YARN-6409 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Affects Versions: 3.0.0-alpha2 >Reporter: Haibo Chen >Assignee: Haibo Chen > Attachments: YARN-6409.00.patch, YARN-6409.01.patch > > > Currently, node blacklisting upon AM failures only handles failures that > happen after AM container is launched (see > RMAppAttemptImpl.shouldCountTowardsNodeBlacklisting()). However, AM launch > can also fail if the NM, where the AM container is allocated, goes > unresponsive. Because it is not handled, scheduler may continue to allocate > AM containers on that same NM for the following app attempts. > {code} > Application application_1478721503753_0870 failed 2 times due to Error > launching appattempt_1478721503753_0870_02. Got exception: > java.io.IOException: Failed on local exception: java.io.IOException: > java.net.SocketTimeoutException: 6 millis timeout while waiting for > channel to be ready for read. 
ch : java.nio.channels.SocketChannel[connected > local=/17.111.179.113:46702 remote=*.me.com/17.111.178.125:8041]; Host > Details : local host is: "*.me.com/17.111.179.113"; destination host is: > "*.me.com":8041; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) > at org.apache.hadoop.ipc.Client.call(Client.java:1475) > at org.apache.hadoop.ipc.Client.call(Client.java:1408) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) > > at com.sun.proxy.$Proxy86.startContainers(Unknown Source) > at > org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96) > > at sun.reflect.GeneratedMethodAccessor155.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256) > > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104) > > at com.sun.proxy.$Proxy87.startContainers(Unknown Source) > at > org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:120) > > at > org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:256) > > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: java.net.SocketTimeoutException: 6 millis > timeout while waiting for channel to be ready for read. 
ch : > java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 > remote=*.me.com/17.111.178.125:8041] > at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693) > > at > org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:650) > > at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738) > at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375) > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524) > at org.apache.hadoop.ipc.Client.call(Client.java:1447) > ... 15 more > Caused by: java.net.SocketTimeoutException: 6 millis timeout while > waiting for channel to be ready for read. ch : > java.nio.channels.SocketChannel[connected local=/17.111.179.113:46702 > remote=*.me.com/17.111.178.125:8041] > at > org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164) > at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) > at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) > at java.io.FilterInputStream.read(FilterInputStream.java:133) > at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) > at java.io.BufferedInputStream.read(BufferedInputStream.java:265) > at java.io.DataInputStream.readInt(DataInputStream.java:387) > at > org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367) > at > org.apache.ha
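The YARN-6409 report above argues that an AM launch failure caused by an unresponsive NM should count toward node blacklisting, not only failures that occur after the AM container is running. A simplified sketch of that decision; the enum and method names here are illustrative stand-ins, not the actual RMAppAttemptImpl.shouldCountTowardsNodeBlacklisting() signature:

```java
/** Sketch: decide whether an AM failure should blacklist the node it was placed on. */
class BlacklistPolicy {
    enum FailureKind { LAUNCH_TIMEOUT, DISKS_FAILED, PREEMPTED, APP_ERROR }

    /**
     * Node-local problems (an unresponsive NM at launch time, bad disks)
     * count toward blacklisting; preemption and application bugs do not,
     * because the app would behave the same on any node.
     */
    static boolean countsTowardNodeBlacklisting(FailureKind kind) {
        switch (kind) {
            case LAUNCH_TIMEOUT:   // NM unresponsive while starting the AM (this issue)
            case DISKS_FAILED:
                return true;
            case PREEMPTED:        // cluster policy, not the node's fault
            case APP_ERROR:        // the app would fail anywhere
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(countsTowardNodeBlacklisting(FailureKind.LAUNCH_TIMEOUT)); // true
        System.out.println(countsTowardNodeBlacklisting(FailureKind.APP_ERROR));      // false
    }
}
```

With a check like this in place, the scheduler would stop reallocating AM containers onto the same unresponsive NM for subsequent app attempts.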
[jira] [Updated] (YARN-6545) Followup fix for YARN-6405
[ https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6545: -- Attachment: YARN-6545.yarn-native-services.01.patch > Followup fix for YARN-6405 > -- > > Key: YARN-6545 > URL: https://issues.apache.org/jira/browse/YARN-6545 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-6545.yarn-native-services.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6545) Followup fix for YARN-6405
[ https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-6545: -- Attachment: (was: YARN-6545.yarn-native-services.01.patch) > Followup fix for YARN-6405 > -- > > Key: YARN-6545 > URL: https://issues.apache.org/jira/browse/YARN-6545 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jian He >Assignee: Jian He > Attachments: YARN-6545.yarn-native-services.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6546) SLS is slow while loading 10k queues
Yufei Gu created YARN-6546: -- Summary: SLS is slow while loading 10k queues Key: YARN-6546 URL: https://issues.apache.org/jira/browse/YARN-6546 Project: Hadoop YARN Issue Type: Sub-task Components: scheduler-load-simulator Reporter: Yufei Gu Assignee: Yufei Gu It takes a long time (more than 10 minutes) to load 10k queues in SLS. The problem appears to be in {{com.codahale.metrics.CsvReporter}}, based on profiler results. SLS creates 14 .csv files for each leaf queue. It is not necessary to log information for inactive queues. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
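Since CsvReporter produces 14 .csv files per leaf queue, one way to avoid that cost for 10k mostly idle queues is to register a queue's metrics only when the queue first becomes active. A minimal sketch of lazy registration; the class below is a hypothetical stand-in, not the Codahale MetricRegistry/CsvReporter API or the actual SLS fix:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: create per-queue metrics on first activity instead of at load time. */
class LazyQueueMetrics {
    private final Map<String, Long> allocated = new HashMap<>();

    /** Called when a queue actually allocates a container; registers it on first use. */
    void recordAllocation(String queueName) {
        // merge() creates the entry only when the queue first allocates,
        // so inactive queues never get per-queue metric state (or files) at all
        allocated.merge(queueName, 1L, Long::sum);
    }

    int trackedQueues() {
        return allocated.size();
    }

    public static void main(String[] args) {
        LazyQueueMetrics m = new LazyQueueMetrics();
        // 10k queues may be configured, but only two ever become active here
        m.recordAllocation("root.a");
        m.recordAllocation("root.a");
        m.recordAllocation("root.b");
        System.out.println(m.trackedQueues()); // 2
    }
}
```

The design choice is to key reporter setup off observed activity rather than the configured queue list, so load time scales with active queues only.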
[jira] [Updated] (YARN-6546) SLS is slow while loading 10k queues
[ https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6546: --- Description: It takes a long time (more than 10 minutes) to load 10k queues in SLS. The problem appears to be in {{com.codahale.metrics.CsvReporter}}, based on profiler results. SLS creates 14 .csv files for each leaf queue, and updates them constantly during execution. It is not necessary to log information for inactive queues. (was: It takes a long time (more than 10 minutes) to load 10k queues in SLS. The problem appears to be in {{com.codahale.metrics.CsvReporter}}, based on profiler results. SLS creates 14 .csv files for each leaf queue. It is not necessary to log information for inactive queues. ) > SLS is slow while loading 10k queues > > > Key: YARN-6546 > URL: https://issues.apache.org/jira/browse/YARN-6546 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Yufei Gu >Assignee: Yufei Gu > > It takes a long time (more than 10 minutes) to load 10k queues in SLS. The > problem appears to be in {{com.codahale.metrics.CsvReporter}}, based on profiler > results. SLS creates 14 .csv files for each leaf queue, and updates them > constantly during execution. It is not necessary to log information for > inactive queues. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6546) SLS is slow while loading 10k queues
[ https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6546: --- Attachment: Desktop.png > SLS is slow while loading 10k queues > > > Key: YARN-6546 > URL: https://issues.apache.org/jira/browse/YARN-6546 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: Desktop.png > > > It takes a long time (more than 10 minutes) to load 10k queues in SLS. The > problem appears to be in {{com.codahale.metrics.CsvReporter}}, based on profiler > results. SLS creates 14 .csv files for each leaf queue, and updates them > constantly during execution. It is not necessary to log information for > inactive queues. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6546) SLS is slow while loading 10k queues
[ https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6546: --- Affects Version/s: 3.0.0-alpha2 > SLS is slow while loading 10k queues > > > Key: YARN-6546 > URL: https://issues.apache.org/jira/browse/YARN-6546 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: Desktop.png > > > It takes a long time (more than 10 minutes) to load 10k queues in SLS. The > problem appears to be in {{com.codahale.metrics.CsvReporter}}, based on profiler > results. SLS creates 14 .csv files for each leaf queue, and updates them > constantly during execution. It is not necessary to log information for > inactive queues. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
[ https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993488#comment-15993488 ] Hadoop QA commented on YARN-6544: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 12s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 30s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch 
passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 11s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry: The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6544 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12865992/YARN-6544-yarn-native-services.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7e0c61fabfca 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / d23a97d | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15798/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15798/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15798/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add Null check RegistryDNS ser
[jira] [Commented] (YARN-6545) Followup fix for YARN-6405
[ https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993523#comment-15993523 ] Hadoop QA commented on YARN-6545: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 2s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 3s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 3 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core: The patch generated 2 new + 132 unchanged - 1 fixed = 134 total (was 133) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 36s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 16s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | slider.server.appmaster.timelineservice.TestServiceTimelinePublisher | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6545 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866023/YARN-6545.yarn-native-services.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 2c6c402cb253 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / d23a97d | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15799/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15799/artifact/patchprocess/diff-check
[jira] [Commented] (YARN-6545) Followup fix for YARN-6405
[ https://issues.apache.org/jira/browse/YARN-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993528#comment-15993528 ] Hadoop QA commented on YARN-6545: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 32s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 21s{color} | {color:green} yarn-native-services passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core in yarn-native-services has 3 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} yarn-native-services passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-slider/hadoop-yarn-slider-core: The patch generated 2 new + 133 unchanged - 1 fixed = 135 total (was 134) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 30s{color} | {color:red} hadoop-yarn-slider-core in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | slider.server.appmaster.timelineservice.TestServiceTimelinePublisher | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:0ac17dc | | JIRA Issue | YARN-6545 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866023/YARN-6545.yarn-native-services.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cadf10044e42 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | yarn-native-services / d23a97d | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15800/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-slider_hadoop-yarn-slider-core-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15800/artifact/patchprocess/diff-checks
[jira] [Commented] (YARN-5301) NM mount cpu cgroups failed on some systems
[ https://issues.apache.org/jira/browse/YARN-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993662#comment-15993662 ] Daniel Templeton commented on YARN-5301:

Latest patch LGTM.

> NM mount cpu cgroups failed on some systems
> -------------------------------------------
>
> Key: YARN-5301
> URL: https://issues.apache.org/jira/browse/YARN-5301
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: sandflee
> Assignee: Miklos Szegedi
> Attachments: YARN-5301.000.patch, YARN-5301.001.patch,
> YARN-5301.002.patch, YARN-5301.003.patch, YARN-5301.004.patch,
> YARN-5301.005.patch, YARN-5301.006.patch, YARN-5301.007.patch,
> YARN-5301.008.patch, YARN-5301.009.patch, YARN-5301.010.patch
>
> On Ubuntu with Linux kernel 3.19, NM startup failed when automatic cgroup
> mounting is enabled. Try these commands:
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu=/cgroup/cpu (fails)
> ./bin/container-executor --mount-cgroups yarn-hadoop cpu,cpuacct=/cgroup/cpu (succeeds)

--
This message was sent by Atlassian JIRA (v6.3.15#6346)

To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
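The failing/succeeding command pair above reflects kernels where the {{cpu}} and {{cpuacct}} controllers are co-mounted in a single hierarchy, so mounting {{cpu}} alone is rejected. A hedged sketch of how one could detect this (this is illustrative code, not the actual container-executor logic): group controllers by the hierarchy id reported in `/proc/cgroups`.

```java
import java.util.*;

// Illustrative sketch (not Hadoop code): group cgroup controllers by
// hierarchy id, as read from /proc/cgroups, so a caller can tell that
// "cpu" must be mounted together with "cpuacct" on this kernel.
public class CgroupHierarchies {
    // Input lines look like: "subsys_name  hierarchy  num_cgroups  enabled"
    public static Map<String, Set<String>> coMounted(List<String> procCgroupsLines) {
        Map<String, Set<String>> byHierarchy = new HashMap<>();
        for (String line : procCgroupsLines) {
            if (line.startsWith("#")) continue;          // skip header line
            String[] f = line.trim().split("\\s+");
            if (f.length < 4) continue;
            byHierarchy.computeIfAbsent(f[1], k -> new TreeSet<>()).add(f[0]);
        }
        // Map each controller to the full set sharing its hierarchy.
        Map<String, Set<String>> result = new HashMap<>();
        for (Set<String> group : byHierarchy.values()) {
            for (String c : group) result.put(c, group);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
            "#subsys_name hierarchy num_cgroups enabled",
            "cpuset 1 1 1",
            "cpu 2 1 1",
            "cpuacct 2 1 1",   // same hierarchy id as cpu -> co-mounted
            "memory 3 1 1");
        System.out.println(coMounted(sample).get("cpu"));  // [cpu, cpuacct]
    }
}
```

With this information in hand, a mount request for {{cpu}} can be widened to {{cpu,cpuacct}}, matching the second (successful) command above.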
[jira] [Commented] (YARN-6481) Yarn top shows negative container number in FS
[ https://issues.apache.org/jira/browse/YARN-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993700#comment-15993700 ] Hudson commented on YARN-6481: -- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11674 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11674/]) YARN-6481. Yarn top shows negative container number in FS (Contributed (templedf: rev 9f0aea0ee2c680afd26ef9da6ac662be00d8e24f) * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java > Yarn top shows negative container number in FS > -- > > Key: YARN-6481 > URL: https://issues.apache.org/jira/browse/YARN-6481 > Project: Hadoop YARN > Issue Type: Bug > Components: yarn >Affects Versions: 2.9.0 >Reporter: Yufei Gu >Assignee: Tao Jie > Labels: newbie > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-6481.001.patch, YARN-6481.002.patch > > > yarn top shows negative container numbers and they didn't change even they > were supposed to. > {code} > NodeManager(s): 2 total, 2 active, 0 unhealthy, 0 decommissioned, 0 lost, 0 > rebooted > Queue(s) Applications: 0 running, 12 submitted, 0 pending, 12 completed, 0 > killed, 0 failed > Queue(s) Mem(GB): 0 available, 0 allocated, 0 pending, 0 reserved > Queue(s) VCores: 0 available, 0 allocated, 0 pending, 0 reserved > Queue(s) Containers: -2 allocated, -2 pending, -2 reserved > APPLICATIONID USER TYPE QUEUE #CONT > #RCONT VCORES RVC > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
[ https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993725#comment-15993725 ] Gour Saha commented on YARN-6544: - [~karams] please fix the checkstyle issues reported here - https://builds.apache.org/job/PreCommit-YARN-Build/15798/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt > Add Null check RegistryDNS service while parsing registry records > - > > Key: YARN-6544 > URL: https://issues.apache.org/jira/browse/YARN-6544 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Affects Versions: yarn-native-services >Reporter: Karam Singh > Fix For: yarn-native-services > > Attachments: YARN-6544-yarn-native-services.001.patch > > > Add Null check RegistryDNS service while parsing registry records for Yarn > persistance attribute. > As of now It assumes that yarn registry record always contain yarn > persistance which is not the case -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Assigned] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
[ https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gour Saha reassigned YARN-6544: --- Assignee: Karam Singh > Add Null check RegistryDNS service while parsing registry records > - > > Key: YARN-6544 > URL: https://issues.apache.org/jira/browse/YARN-6544 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Affects Versions: yarn-native-services >Reporter: Karam Singh >Assignee: Karam Singh > Fix For: yarn-native-services > > Attachments: YARN-6544-yarn-native-services.001.patch > > > Add Null check RegistryDNS service while parsing registry records for Yarn > persistance attribute. > As of now It assumes that yarn registry record always contain yarn > persistance which is not the case -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-6547) Enhance SLS-based tests leveraging invariant checker
Carlo Curino created YARN-6547: -- Summary: Enhance SLS-based tests leveraging invariant checker Key: YARN-6547 URL: https://issues.apache.org/jira/browse/YARN-6547 Project: Hadoop YARN Issue Type: Bug Reporter: Carlo Curino Assignee: Carlo Curino We can leverage {{InvariantChecker}}s to provide a more thorough validation of SLS-based tests. This patch introduces invariants checking during and at the end of the run. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6544) Add Null check RegistryDNS service while parsing registry records
[ https://issues.apache.org/jira/browse/YARN-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993770#comment-15993770 ] Jian He commented on YARN-6544: --- [~karams], we may add a warning log in the else case, if the data is null > Add Null check RegistryDNS service while parsing registry records > - > > Key: YARN-6544 > URL: https://issues.apache.org/jira/browse/YARN-6544 > Project: Hadoop YARN > Issue Type: Sub-task > Components: yarn-native-services >Affects Versions: yarn-native-services >Reporter: Karam Singh >Assignee: Karam Singh > Fix For: yarn-native-services > > Attachments: YARN-6544-yarn-native-services.001.patch > > > Add Null check RegistryDNS service while parsing registry records for Yarn > persistance attribute. > As of now It assumes that yarn registry record always contain yarn > persistance which is not the case -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
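The fix under discussion is a null guard with, as suggested above, a warning log in the else branch rather than an unconditional dereference. A minimal sketch of that pattern follows; the class name, attribute key, and default value are hypothetical, not the actual RegistryDNS code.

```java
import java.util.Map;
import java.util.logging.Logger;

// Illustrative sketch only: "RecordParser", the "yarn:persistence" key,
// and the "permanent" default are hypothetical stand-ins. The point is
// the null-guarded lookup plus warning in the else case.
public class RecordParser {
    private static final Logger LOG = Logger.getLogger(RecordParser.class.getName());
    static final String DEFAULT_PERSISTENCE = "permanent";

    public static String getPersistence(Map<String, String> record) {
        String value = record.get("yarn:persistence");
        if (value != null) {
            return value;
        } else {
            // Previously this path assumed the attribute was always present;
            // now we warn and fall back instead of throwing an NPE.
            LOG.warning("Registry record has no persistence attribute; "
                + "falling back to '" + DEFAULT_PERSISTENCE + "'");
            return DEFAULT_PERSISTENCE;
        }
    }

    public static void main(String[] args) {
        System.out.println(getPersistence(Map.of("yarn:persistence", "container")));
        System.out.println(getPersistence(Map.of()));  // falls back, logs a warning
    }
}
```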
[jira] [Updated] (YARN-6547) Enhance SLS-based tests leveraging invariant checker
[ https://issues.apache.org/jira/browse/YARN-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6547: --- Attachment: YARN-6547.v0.patch > Enhance SLS-based tests leveraging invariant checker > > > Key: YARN-6547 > URL: https://issues.apache.org/jira/browse/YARN-6547 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-6547.v0.patch > > > We can leverage {{InvariantChecker}}s to provide a more thorough validation > of SLS-based tests. This patch introduces invariants checking during and at > the end of the run. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6547) Enhance SLS-based tests leveraging invariant checker
[ https://issues.apache.org/jira/browse/YARN-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993790#comment-15993790 ] Carlo Curino commented on YARN-6547:

[~wangda] as promised, this patch provides a way to do "tighter" tests. It leverages the {{InvariantChecker}} both during execution and at the end, to verify that the values in {{QueueMetrics}} and {{JvmMetrics}} match our expectations. The current sets of invariants ({{ongoing-invariants.txt}} and {{exit-invariants.txt}}) are just placeholders; we should work together on defining the tightest robust set of invariants. This is easier if we set up traces that are short enough to run to completion; otherwise (the current situation in patch v0), the simulation speed determines the point at which we interrupt the run, and the exit invariants are hard to make both robust and tight. It might make sense to separate this patch (using loose invariants) from a patch that only changes the set of invariants and trace files to make them tighter and more robust.

> Enhance SLS-based tests leveraging invariant checker
> ----------------------------------------------------
>
> Key: YARN-6547
> URL: https://issues.apache.org/jira/browse/YARN-6547
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Carlo Curino
> Assignee: Carlo Curino
> Attachments: YARN-6547.v0.patch
>
> We can leverage {{InvariantChecker}}s to provide a more thorough validation
> of SLS-based tests. This patch introduces invariants checking during and at
> the end of the run.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)

To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
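For readers unfamiliar with these invariants files: assuming the checker evaluates one boolean expression per line over exported metric names (as the discussion suggests), an exit-invariants file for a trace that runs to completion might contain entries like the following. These exact entries are hypothetical illustrations, not the placeholders shipped with the patch; the names are standard {{QueueMetrics}} metric names.

```
AppsRunning == 0
AppsPending == 0
AppsFailed == 0
AggregateContainersAllocated >= 0
```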
[jira] [Updated] (YARN-6546) SLS is slow while loading 10k queues
[ https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6546: --- Attachment: YARN-6546.001.patch > SLS is slow while loading 10k queues > > > Key: YARN-6546 > URL: https://issues.apache.org/jira/browse/YARN-6546 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: Desktop.png, YARN-6546.001.patch > > > It takes a long time (more than 10 minutes) to load 10k queues in SLS. The > problem should be in {{com.codahale.metrics.CsvReporter}} based on the result > from profiler. SLS creates 14 .csv files for each leaf queue, and update them > constantly during execution. It is not necessary to log information for > inactive queues. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6546) SLS is slow while loading 10k queues
[ https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993827#comment-15993827 ] Yufei Gu commented on YARN-6546:

Patch v1 removes the code that creates metrics files for all leaf queues; metrics files are still created for active queues. Based on my tests, SLS gets much better performance with the patch.

> SLS is slow while loading 10k queues
> ------------------------------------
>
> Key: YARN-6546
> URL: https://issues.apache.org/jira/browse/YARN-6546
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: scheduler-load-simulator
> Affects Versions: 3.0.0-alpha2
> Reporter: Yufei Gu
> Assignee: Yufei Gu
> Attachments: Desktop.png, YARN-6546.001.patch
>
> It takes a long time (more than 10 minutes) to load 10k queues in SLS. The
> problem should be in {{com.codahale.metrics.CsvReporter}} based on the result
> from profiler. SLS creates 14 .csv files for each leaf queue, and update them
> constantly during execution. It is not necessary to log information for
> inactive queues.

--
This message was sent by Atlassian JIRA (v6.3.15#6346)

To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
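The approach described above amounts to lazy initialization of per-queue metrics: instead of opening 14 CSV reporters per queue at load time (slow with 10k queues), a tracker is materialized the first time a queue sees activity. A hedged sketch of the pattern, with hypothetical class names rather than the actual SLS code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: QueueTracker is a hypothetical stand-in for the
// per-queue CSV reporter bundle. Trackers are created on first use, so
// the 10k inactive queues cost nothing at load time.
public class LazyQueueMetrics {
    static final class QueueTracker {
        final String queue;
        QueueTracker(String queue) { this.queue = queue; }
        // real code would open the per-queue CSV reporters here
    }

    private final Map<String, QueueTracker> trackers = new ConcurrentHashMap<>();
    final AtomicInteger created = new AtomicInteger();

    // Called on allocation in a queue, not while loading the queue config.
    QueueTracker trackerFor(String queue) {
        return trackers.computeIfAbsent(queue, q -> {
            created.incrementAndGet();
            return new QueueTracker(q);
        });
    }

    public static void main(String[] args) {
        LazyQueueMetrics m = new LazyQueueMetrics();
        // 10k queues may be configured, but only two ever see an allocation:
        m.trackerFor("root.q42");
        m.trackerFor("root.q42");   // reused, not recreated
        m.trackerFor("root.q7");
        System.out.println(m.created.get());  // 2
    }
}
```

`ConcurrentHashMap.computeIfAbsent` keeps the creation atomic, so concurrent allocations in the same queue still produce exactly one tracker.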
[jira] [Updated] (YARN-4812) TestFairScheduler#testContinuousScheduling fails intermittently
[ https://issues.apache.org/jira/browse/YARN-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-4812: - Fix Version/s: 2.8.2 Thanks, Karthik! I committed this to branch-2.8 as well. > TestFairScheduler#testContinuousScheduling fails intermittently > --- > > Key: YARN-4812 > URL: https://issues.apache.org/jira/browse/YARN-4812 > Project: Hadoop YARN > Issue Type: Bug > Components: fairscheduler >Reporter: Karthik Kambatla >Assignee: Karthik Kambatla > Fix For: 2.9.0, 3.0.0-alpha1, 2.8.2 > > Attachments: yarn-4812-1.patch > > > This test has failed in the past, and there seem to be more issues. > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.testContinuousScheduling(TestFairScheduler.java:3816) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-5343) TestContinuousScheduling#testSortedNodes fails intermittently
[ https://issues.apache.org/jira/browse/YARN-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated YARN-5343: - Fix Version/s: 2.8.2 Thanks, [~yufeigu]! I committed this to branch-2.8 as well. > TestContinuousScheduling#testSortedNodes fails intermittently > - > > Key: YARN-5343 > URL: https://issues.apache.org/jira/browse/YARN-5343 > Project: Hadoop YARN > Issue Type: Test >Reporter: sandflee >Assignee: Yufei Gu >Priority: Minor > Fix For: 2.9.0, 3.0.0-alpha1, 2.8.2 > > Attachments: YARN-5343.001.patch > > > {noformat} > java.lang.AssertionError: expected:<2> but was:<1> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestContinuousScheduling.testSortedNodes(TestContinuousScheduling.java:167) > {noformat} > https://builds.apache.org/job/PreCommit-YARN-Build/12250/testReport/org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair/TestContinuousScheduling/testSortedNodes/ -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6547) Enhance SLS-based tests leveraging invariant checker
[ https://issues.apache.org/jira/browse/YARN-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993851#comment-15993851 ] Hadoop QA commented on YARN-6547:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 17s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
| +1 | mvninstall | 14m 38s | trunk passed |
| +1 | compile | 0m 20s | trunk passed |
| +1 | checkstyle | 0m 12s | trunk passed |
| +1 | mvnsite | 0m 19s | trunk passed |
| +1 | mvneclipse | 0m 17s | trunk passed |
| -1 | findbugs | 0m 31s | hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 15s | trunk passed |
| +1 | mvninstall | 0m 18s | the patch passed |
| +1 | compile | 0m 15s | the patch passed |
| +1 | javac | 0m 15s | the patch passed |
| -0 | checkstyle | 0m 10s | hadoop-tools/hadoop-sls: The patch generated 6 new + 0 unchanged - 0 fixed = 6 total (was 0) |
| +1 | mvnsite | 0m 20s | the patch passed |
| +1 | mvneclipse | 0m 17s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 39s | the patch passed |
| +1 | javadoc | 0m 12s | the patch passed |
| -1 | unit | 0m 58s | hadoop-sls in the patch failed. |
| -1 | asflicense | 0m 19s | The patch generated 2 ASF License warnings. |
| | | 21m 39s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.sls.TestSLSRunner |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6547 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866053/YARN-6547.v0.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux df19e4c1a71a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9f0aea0 |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15801/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-sls-warnings.html |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15801/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt |
| unit | https://builds.apache.org/job/PreCommit-YARN-Build/15801/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-sls.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15801/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-YARN-Build/15801/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15801/console |
| Powered by | Apache Yetus 0.5.0-S
[jira] [Updated] (YARN-6469) Extending Synthetic Load Generator and SLS for recurring reservation
[ https://issues.apache.org/jira/browse/YARN-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6469: --- Attachment: YARN-6469.v1.patch > Extending Synthetic Load Generator and SLS for recurring reservation > > > Key: YARN-6469 > URL: https://issues.apache.org/jira/browse/YARN-6469 > Project: Hadoop YARN > Issue Type: Sub-task > Components: resourcemanager >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-6469.v0.patch, YARN-6469.v1.patch > > > This JIRA extends the synthetic load generator, and SLS to support the > generation and submission of recurring jobs. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6546) SLS is slow while loading 10k queues
[ https://issues.apache.org/jira/browse/YARN-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993899#comment-15993899 ] Hadoop QA commented on YARN-6546:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 13m 48s | trunk passed |
| +1 | compile | 0m 18s | trunk passed |
| +1 | checkstyle | 0m 14s | trunk passed |
| +1 | mvnsite | 0m 20s | trunk passed |
| +1 | mvneclipse | 0m 18s | trunk passed |
| -1 | findbugs | 0m 33s | hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 0m 14s | trunk passed |
| +1 | mvninstall | 0m 18s | the patch passed |
| +1 | compile | 0m 15s | the patch passed |
| +1 | javac | 0m 15s | the patch passed |
| -0 | checkstyle | 0m 10s | hadoop-tools/hadoop-sls: The patch generated 4 new + 12 unchanged - 0 fixed = 16 total (was 12) |
| +1 | mvnsite | 0m 18s | the patch passed |
| +1 | mvneclipse | 0m 15s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 36s | the patch passed |
| +1 | javadoc | 0m 11s | the patch passed |
| +1 | unit | 4m 53s | hadoop-sls in the patch passed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 24m 40s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6546 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866056/YARN-6546.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 1e231061a061 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cedaf4c |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15802/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-sls-warnings.html |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15802/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15802/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15802/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> SLS is slow while loading 10k queues
>
> Key: Y
[jira] [Updated] (YARN-6462) Add yarn command to list all queues
[ https://issues.apache.org/jira/browse/YARN-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated YARN-6462: --- Fix Version/s: (was: 3.0.0-alpha3) > Add yarn command to list all queues > --- > > Key: YARN-6462 > URL: https://issues.apache.org/jira/browse/YARN-6462 > Project: Hadoop YARN > Issue Type: Improvement > Components: client >Reporter: Shen Yinjie >Assignee: Shen Yinjie > Attachments: YARN-6462_1.patch, YARN-6462_2.patch > > > we need a yarn command to list all queues ,and there is this kind of command > for applications and nodemangers already... -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6251) Fix Scheduler locking issue introduced by YARN-6216
[ https://issues.apache.org/jira/browse/YARN-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated YARN-6251: --- Fix Version/s: (was: 3.0.0-alpha3) > Fix Scheduler locking issue introduced by YARN-6216 > --- > > Key: YARN-6251 > URL: https://issues.apache.org/jira/browse/YARN-6251 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Arun Suresh >Assignee: Arun Suresh > Attachments: YARN-6251.001.patch > > > Opening to track a locking issue that was uncovered when running a custom SLS > AMSimulator. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations
[ https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6473: --- Attachment: YARN-6473.v1.patch > Create ReservationInvariantChecker to validate ReservationSystem + Scheduler > operations > --- > > Key: YARN-6473 > URL: https://issues.apache.org/jira/browse/YARN-6473 > Project: Hadoop YARN > Issue Type: Bug >Reporter: Carlo Curino >Assignee: Carlo Curino > Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch > > > This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. > It is in particularly useful to create integration tests, or for test > clusters, where we can continuously (and possibly costly) check the > ReservationSystem + Scheduler are operating as expected. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6451) Add RM monitor validating metrics invariants
[ https://issues.apache.org/jira/browse/YARN-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993981#comment-15993981 ] Wangda Tan commented on YARN-6451:

Thanks [~curino]/[~chris.douglas]. Beyond metrics, I think there is a lot of information that is not captured in metrics, such as the order of container allocations needed to ensure FIFO/fairness, etc. Have you thought about how to formalize these requirements?

> Add RM monitor validating metrics invariants
> --------------------------------------------
>
> Key: YARN-6451
> URL: https://issues.apache.org/jira/browse/YARN-6451
> Project: Hadoop YARN
> Issue Type: New Feature
> Reporter: Carlo Curino
> Assignee: Carlo Curino
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6451.v0.patch, YARN-6451.v1.patch,
> YARN-6451.v2.patch, YARN-6451.v3.patch, YARN-6451.v4.patch, YARN-6451.v5.patch
>
> For SLS runs, as well as for live test clusters (and maybe prod), it would be
> useful to have a mechanism to continuously check whether core invariants of
> the RM/Scheduler are respected (e.g., no priority inversions, fairness mostly
> respected, certain latencies within expected range, etc..)

--
This message was sent by Atlassian JIRA (v6.3.15#6346)

To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6418) Reduce Resource object creation and overhead in Capacity scheduler inner loop
[ https://issues.apache.org/jira/browse/YARN-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated YARN-6418: --- Affects Version/s: (was: 2.8) > Reduce Resource object creation and overhead in Capacity scheduler inner loop > - > > Key: YARN-6418 > URL: https://issues.apache.org/jira/browse/YARN-6418 > Project: Hadoop YARN > Issue Type: Improvement > Components: capacityscheduler, resourcemanager >Affects Versions: 2.7.3, 3.0.0-alpha2 >Reporter: Roni Burd >Assignee: Roni Burd > > Resource object is used multiple due to ResourceCalculator creates new > instances on every method call. This gets called several times on each node > HB. Resource is a very expensive object that relies on Protobufs > The change is to remove the need to use protobuf on the Resource object and > avoid creating many objects in the Resource Calculator all the time -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-6498) The SLS offline mode doesn't work
[ https://issues.apache.org/jira/browse/YARN-6498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated YARN-6498: --- Affects Version/s: (was: 2.8) > The SLS offline mode doesn't work > - > > Key: YARN-6498 > URL: https://issues.apache.org/jira/browse/YARN-6498 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-6451) Add RM monitor validating metrics invariants
[ https://issues.apache.org/jira/browse/YARN-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994014#comment-15994014 ] Carlo Curino commented on YARN-6451: I see two or three alternatives: # Hard-coding the most important invariants in a programmatic way; you can see an example of this in YARN-6473, where I poke the {{ReservationSystem}} and {{YarnScheduler}} to check whether their data structures remain in sync during execution. This is more minimalistic/efficient, but any extension requires code changes. For example, you can maintain an observer of container allocations and check that certain ordering properties are respected. # Expanding the mechanics of YARN-6451 by adding "bindings" for many more parts of the RM internal state, which one is then allowed to mention in the {{invariants.txt}} file. Metrics were a natural starting point, as the cost of gathering them is already paid and their names are externally known. To minimize the cost, we could load the {{invariants.txt}} expressions and then limit the "state" we probe to the minimum needed to cover our expressions. # Leveraging compiler APIs / aspects / dependency-injection tricks to dynamically modify the code that does the binding work, to cover whatever appears in the {{invariants.txt}} file. This is obviously the richest option, though it has some maintainability issues. In YARN-6547 I propose a simple way of combining the YARN-6363 and YARN-6451 capabilities to run tests that check an SLS run for common invariants (both during and at the end of the run). That is mostly a mechanism patch, but we can work together to define very tight yet robust invariants for specific runs.
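The metrics-based invariant checking discussed in this thread can be sketched roughly as follows. This is an illustrative toy, not the YARN-6451 implementation: the class name and the trivial {{metric op constant}} grammar are invented here, whereas the actual patch evaluates richer {{invariants.txt}} expressions against the RM metrics.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of checking "invariants.txt"-style expressions against a snapshot of
// metric values. Only the form "<metric> <op> <long>" with >=, <=, == is
// handled here; real expressions can be arbitrarily richer.
class InvariantChecker {
  private final Map<String, Long> metrics;

  InvariantChecker(Map<String, Long> metrics) {
    this.metrics = metrics;
  }

  /** Returns the invariants violated by the current metrics snapshot. */
  List<String> violated(List<String> invariants) {
    List<String> broken = new ArrayList<>();
    for (String inv : invariants) {
      String[] p = inv.trim().split("\\s+"); // e.g. "AllocatedContainers >= 0"
      long lhs = metrics.getOrDefault(p[0], 0L);
      long rhs = Long.parseLong(p[2]);
      boolean ok;
      switch (p[1]) {
        case ">=": ok = lhs >= rhs; break;
        case "<=": ok = lhs <= rhs; break;
        case "==": ok = lhs == rhs; break;
        default: throw new IllegalArgumentException("unknown op: " + p[1]);
      }
      if (!ok) {
        broken.add(inv + " (actual " + p[0] + " = " + lhs + ")");
      }
    }
    return broken;
  }
}
```

Run continuously against the scheduler's metrics, an invariant like {{AllocatedContainers >= 0}} would, for instance, have flagged the negative container counts reported in YARN-6481.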
[jira] [Comment Edited] (YARN-6451) Add RM monitor validating metrics invariants
[ https://issues.apache.org/jira/browse/YARN-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994014#comment-15994014 ] Carlo Curino edited comment on YARN-6451 at 5/2/17 11:31 PM: - I see two or three alternatives: # Hard-coding the most important invariants in a programmatic way; you can see an example of this in YARN-6473, where I poke the {{ReservationSystem}} and {{YarnScheduler}} to check whether their data structures remain in sync during execution. This is more minimalistic/efficient, but any extension requires code changes. For example, you can maintain an observer of container allocations and check that certain ordering properties are respected. # Expanding the mechanics of YARN-6451 by adding "bindings" for many more parts of the RM internal state, which one is then allowed to mention in the {{invariants.txt}} file. Metrics were a natural starting point, as the cost of gathering them is already paid and their names are externally known. To minimize the cost, we could load the {{invariants.txt}} expressions and then limit the "state" we probe to the minimum needed to cover our expressions. # (Discussing with [~chris.douglas], another option emerged.) Leveraging compiler APIs / aspects / dependency-injection tricks to dynamically modify the code that does the binding work, to cover whatever appears in the {{invariants.txt}} file. This is obviously the richest option, though it has some maintainability issues. In YARN-6547 I propose a simple way of combining the YARN-6363 and YARN-6451 capabilities to run tests that check an SLS run for common invariants (both during and at the end of the run). That is mostly a mechanism patch, but we can work together to define very tight yet robust invariants for specific runs.
[jira] [Created] (YARN-6548) Adding reservation metrics in QueueMetrics
Carlo Curino created YARN-6548: -- Summary: Adding reservation metrics in QueueMetrics Key: YARN-6548 URL: https://issues.apache.org/jira/browse/YARN-6548 Project: Hadoop YARN Issue Type: Sub-task Reporter: Carlo Curino Assignee: Carlo Curino This JIRA tracks an effort to extend the QueueMetrics to include relevant metrics for the ReservationSystem.
[jira] [Created] (YARN-6549) Print enough container-executor logs for troubleshooting container launch failures
Wangda Tan created YARN-6549: Summary: Print enough container-executor logs for troubleshooting container launch failures Key: YARN-6549 URL: https://issues.apache.org/jira/browse/YARN-6549 Project: Hadoop YARN Issue Type: Sub-task Reporter: Wangda Tan Now container-executor doesn't print enough logs for troubleshooting. We need to fix that.
[jira] [Commented] (YARN-6549) Print enough container-executor logs for troubleshooting container launch failures
[ https://issues.apache.org/jira/browse/YARN-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994021#comment-15994021 ] Wangda Tan commented on YARN-6549: -- + [~sidharta-s], who looked at the issue before.
[jira] [Created] (YARN-6550) Capture launch_container.sh logs
Wangda Tan created YARN-6550: Summary: Capture launch_container.sh logs Key: YARN-6550 URL: https://issues.apache.org/jira/browse/YARN-6550 Project: Hadoop YARN Issue Type: Sub-task Reporter: Wangda Tan launch_container.sh, which is generated by the NM, does a bunch of things (like creating links, etc.) while launching a process. No logs are captured until {{exec}} is called. We need to capture all failures of launch_container.sh for easier troubleshooting.
[jira] [Commented] (YARN-6550) Capture launch_container.sh logs
[ https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994033#comment-15994033 ] Wangda Tan commented on YARN-6550: -- +[~vinodkv]/[~sidharta-s]/[~vvasudev].
[jira] [Commented] (YARN-6451) Add RM monitor validating metrics invariants
[ https://issues.apache.org/jira/browse/YARN-6451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994042#comment-15994042 ] Wangda Tan commented on YARN-6451: -- Thanks [~curino] for your responses. I personally think #3 is a good way to go, and I agree with the approach of getting the low-hanging fruit first via the existing metrics-based mechanisms.
[jira] [Updated] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router
[ https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Giovanni Matteo Fumarola updated YARN-5411: --- Attachment: YARN-5411-YARN-2915.v4.patch > Create a proxy chain for ApplicationClientProtocol in the Router > > > Key: YARN-5411 > URL: https://issues.apache.org/jira/browse/YARN-5411 > Project: Hadoop YARN > Issue Type: Sub-task > Components: nodemanager, resourcemanager >Reporter: Subru Krishnan >Assignee: Giovanni Matteo Fumarola > Attachments: YARN-5411-YARN-2915.v1.patch, > YARN-5411-YARN-2915.v2.patch, YARN-5411-YARN-2915.v3.patch, > YARN-5411-YARN-2915.v4.patch > > > As detailed in the proposal in the umbrella JIRA, we are introducing a new > component that routes client requests to the appropriate ResourceManager(s). This > JIRA tracks the creation of a proxy for ApplicationClientProtocol in the > Router. This provides a placeholder for: > 1) throttling mis-behaving clients (YARN-1546) > 2) masking the access to multiple RMs (YARN-3659) > We are planning to follow the interceptor pattern like we did in YARN-2884 to > generalize the approach and have only dynamic coupling for Federation.
[jira] [Commented] (YARN-6548) Adding reservation metrics in QueueMetrics
[ https://issues.apache.org/jira/browse/YARN-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994050#comment-15994050 ] Carlo Curino commented on YARN-6548: In this patch, we add metrics to cover the {{ReservationSystem}} and add tests based on YARN-6547. The patch will have issues applying; marking it as patch-available to signal it is ready for review.
[jira] [Updated] (YARN-6548) Adding reservation metrics in QueueMetrics
[ https://issues.apache.org/jira/browse/YARN-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carlo Curino updated YARN-6548: --- Attachment: YARN-6548.v0.patch
[jira] [Commented] (YARN-5411) Create a proxy chain for ApplicationClientProtocol in the Router
[ https://issues.apache.org/jira/browse/YARN-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994058#comment-15994058 ] Wangda Tan commented on YARN-5411: -- Thanks [~giovanni.fumarola]. Took a brief look at the patch and discussed with [~subru] offline. The general approach looks fine.
[jira] [Commented] (YARN-6469) Extending Synthetic Load Generator and SLS for recurring reservation
[ https://issues.apache.org/jira/browse/YARN-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994073#comment-15994073 ] Hadoop QA commented on YARN-6469:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| 0 | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 | mvninstall | 16m 2s | trunk passed |
| +1 | compile | 19m 33s | trunk passed |
| +1 | checkstyle | 2m 13s | trunk passed |
| +1 | mvnsite | 1m 16s | trunk passed |
| +1 | mvneclipse | 0m 48s | trunk passed |
| -1 | findbugs | 1m 8s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant Findbugs warnings. |
| -1 | findbugs | 0m 39s | hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs warnings. |
| +1 | javadoc | 1m 4s | trunk passed |
| 0 | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 | mvninstall | 0m 58s | the patch passed |
| +1 | compile | 16m 11s | the patch passed |
| +1 | javac | 16m 11s | the patch passed |
| -0 | checkstyle | 1m 58s | root: The patch generated 10 new + 44 unchanged - 1 fixed = 54 total (was 45) |
| +1 | mvnsite | 1m 11s | the patch passed |
| +1 | mvneclipse | 0m 51s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 2m 19s | the patch passed |
| +1 | javadoc | 1m 10s | the patch passed |
| +1 | unit | 2m 46s | hadoop-yarn-common in the patch passed. |
| -1 | unit | 4m 45s | hadoop-sls in the patch failed. |
| +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | 100m 57s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.sls.TestSLSRunner |
| | hadoop.yarn.sls.nodemanager.TestNMSimulator |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6469 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866069/YARN-6469.v1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux ad677ca495fe 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cedaf4c |
| Default Java | 1.8.0_121 |
| findbugs | v3.1.0-RC1 |
| findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15803/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoo
[jira] [Updated] (YARN-6522) Make SLS JSON input file format simple and scalable
[ https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yufei Gu updated YARN-6522: --- Attachment: YARN-6522.002.patch > Make SLS JSON input file format simple and scalable > --- > > Key: YARN-6522 > URL: https://issues.apache.org/jira/browse/YARN-6522 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6522.001.patch, YARN-6522.002.patch > > > The SLS input format is verbose, and it doesn't scale. We can improve it in > these ways: > # We need to configure tasks one by one if there is more than one task in a > job, which means the job configuration usually includes lots of redundant > items. Specifying the number of tasks in the task configuration will solve > this issue. > # Container host is useful for locality testing, but it is obnoxious to specify > the container host for each task in tests unrelated to locality. We would like > to make it optional. > # For most tests, we don't care about job.id. Make it optional and generated > automatically by default. > # job.finish.ms doesn't make sense; just remove it. > # Container type and container priority should be optional as well.
[jira] [Commented] (YARN-5949) Add pluggable configuration policy interface as a component of MutableCSConfigurationProvider
[ https://issues.apache.org/jira/browse/YARN-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994089#comment-15994089 ] Wangda Tan commented on YARN-5949: -- Thanks [~jhung], bq. This makes sense, should just be able to either grab the second-to-last component of the key, or the component before "accessible-node-labels"/"ordering-policy". I think we can also search for "root", if it is not there then assume it is a global config change. I think we can handle it this way: there is a set of known queue paths like \{"root.queueA", "root.queueA.A1"\}. For a given config key to change, first remove the common prefix ("yarn.scheduler.capacity."), then do a longest prefix match against the known queue paths. - If we can find a non-empty common prefix, check the queue's accessibility. - If we cannot, this is a global config; check admin permission. This approach doesn't need to handle special options like "accessible-node-labels" and doesn't need to use "root" to mark the start of the queue path, which to me is not a safe approach. bq. As long as we access these queues via YarnScheduler#getQueueInfo, is this API still necessary? When the scheduler is reinitialized and the next mutation comes in, it will check against the queues from the most recent reinitialization. We may have to call getQueueInfo every time a config mutation request comes in; since the frequency of mutation requests should not be super high, I think the stateless approach should be fine. bq. This is not implemented yet, but I was thinking of handling this in RMWebServices, there are some cases that have not been handled (e.g. updating config for a queue which doesn't exist shouldn't be allowed; right now it "succeeds" silently). So we can address these cases in a separate jira. Agreed, it will add a never-used option; that doesn't sound like a critical issue, so we can handle it in a separate JIRA. bq. Yes you're right, this is not handled yet, in fact there is still some handling we need to do in RMWebServices for global configs, we can address this in a separate jira as well. If non-trivial effort is needed for this, I'm OK with moving it to a separate JIRA. This is quite important to me: in fact, I think we should not assume any scheduler-specific configurations inside RMWebServices (like adding special logic to handle "yarn.scheduler.capacity."). Thoughts? > Add pluggable configuration policy interface as a component of > MutableCSConfigurationProvider > - > > Key: YARN-5949 > URL: https://issues.apache.org/jira/browse/YARN-5949 > Project: Hadoop YARN > Issue Type: Sub-task >Reporter: Jonathan Hung >Assignee: Jonathan Hung > Attachments: YARN-5949-YARN-5734.001.patch, > YARN-5949-YARN-5734.002.patch > > > This will allow different policies to customize how/if configuration changes > should be applied (for example, a policy might restrict whether a > configuration change by a certain user is allowed). This will be enforced by > the MutableCSConfigurationProvider.
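The longest-prefix-match idea described in the comment above could look roughly like this. The class and method names are invented for illustration; this is not code from the YARN-5949 patch.

```java
import java.util.Optional;
import java.util.Set;

// Sketch: resolve a capacity-scheduler config key either to the queue it
// belongs to (longest matching known queue path) or to "global" (empty).
class QueueKeyResolver {
  private static final String PREFIX = "yarn.scheduler.capacity.";

  static Optional<String> resolveQueue(String key, Set<String> queuePaths) {
    if (!key.startsWith(PREFIX)) {
      return Optional.empty();
    }
    String rest = key.substring(PREFIX.length());
    String best = null;
    for (String path : queuePaths) {
      // Require the trailing dot so "root.queueA" cannot match "root.queueA1",
      // and prefer the longest path so "root.queueA.A1" wins over "root.queueA".
      if (rest.startsWith(path + ".") && (best == null || path.length() > best.length())) {
        best = path;
      }
    }
    return Optional.ofNullable(best);
  }
}
```

As the comment notes, this needs no special-casing of options like "accessible-node-labels": a key such as {{yarn.scheduler.capacity.root.queueA.accessible-node-labels}} resolves to {{root.queueA}} simply because no longer known queue path matches, while a key with no matching queue path is treated as a global (admin-only) setting.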
[jira] [Comment Edited] (YARN-6522) Make SLS JSON input file format simple and scalable
[ https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994097#comment-15994097 ] Yufei Gu edited comment on YARN-6522 at 5/3/17 12:28 AM: - Thanks [~rkanter] for the review. Uploaded patch v2, which does these follow-ups: 1. Add static constants for mapper/reducer priority. 2. Add back {{job.finish.ms}}. 3. Change to {{num.nodes}} and {{num.racks}}, and add unit tests. The function {{generateNodes}} was originally used by the SYN input format and was flawed; fix the issue in it. 4. Add documentation. 5. {{jsonJob.get("job.id").toString()}} throws an NPE if there is no {{job.id}}.
[jira] [Commented] (YARN-6522) Make SLS JSON input file format simple and scalable
[ https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994097#comment-15994097 ] Yufei Gu commented on YARN-6522: Thanks [~rkanter] for the review. Uploaded patch v2. 1. Add static constants for mapper/reducer priority. 2. Add back {{job.finish.ms}}. 3. Change to {{num.nodes}} and {{num.racks}}, and add unit tests. The function {{generateNodes}} was originally used by the SYN input format and was flawed; fix the issue in it. 4. Add documentation. 5. {{jsonJob.get("job.id").toString()}} throws an NPE if there is no {{job.id}}.
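Item 5 above (the NPE when {{job.id}} is absent) can be illustrated with a small null-safe sketch. The class is hypothetical, and a plain Map stands in for the parsed JSON job node (SLS itself uses a JSON parser); the point is only that a missing, now-optional {{job.id}} yields a generated id instead of a NullPointerException.

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Toy illustration of null-safe optional-field handling for "job.id".
class SlsJobIds {
  private static final AtomicLong COUNTER = new AtomicLong();

  static String jobId(Map<String, Object> jsonJob) {
    Object id = jsonJob.get("job.id");  // null when the field is omitted
    // Generate a unique id when the optional field is absent, instead of
    // calling toString() on null and throwing an NPE.
    return id != null ? id.toString() : "job_" + COUNTER.incrementAndGet();
  }
}
```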
[jira] [Created] (YARN-6551) Validate SLS input
Yufei Gu created YARN-6551: -- Summary: Validate SLS input Key: YARN-6551 URL: https://issues.apache.org/jira/browse/YARN-6551 Project: Hadoop YARN Issue Type: Bug Components: scheduler-load-simulator Reporter: Yufei Gu SLS takes three different input formats: SLS, RUMEN, and SYN. Some values need to be validated, e.g. the node number cannot be negative.
[jira] [Comment Edited] (YARN-6522) Make SLS JSON input file format simple and scalable
[ https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994097#comment-15994097 ] Yufei Gu edited comment on YARN-6522 at 5/3/17 12:53 AM: - Thanks [~rkanter] for the review. Uploaded patch v2 which does followups: 1. Add static constant for mapper/reduce priority. 2. Add back {{job.finish.ms}} 3. Change to {{num.nodes}} and {{num.racks}}, add unit tests. The function {{generateNodes}} is used by SYN input format originally and flaw. Fix the issue in it. Add some some validation in patch v2 and filed YARN-6511 for other validations. 4. Add documentation 5. {{jsonJob.get("job.id").toString()}} throws a NPE if there is no {{job.id}}. was (Author: yufeigu): Thanks [~rkanter] for the review. Uploaded patch v2 which does followups: 1. Add static constant for mapper/reduce priority. 2. Add back {{job.finish.ms}} 3. Change to {{num.nodes}} and {{num.racks}}, add unit tests. The function {{generateNodes}} is used by SYN input format originally and flaw. Fix the issue in it. 4. Add documentation 5. {{jsonJob.get("job.id").toString()}} throws a NPE if there is no {{job.id}}. > Make SLS JSON input file format simple and scalable > --- > > Key: YARN-6522 > URL: https://issues.apache.org/jira/browse/YARN-6522 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Affects Versions: 3.0.0-alpha2 >Reporter: Yufei Gu >Assignee: Yufei Gu > Attachments: YARN-6522.001.patch, YARN-6522.002.patch > > > SLS input format is verbose, and it doesn't scale out. We can improve it in > these ways: > # We need to configure tasks one by one if there are more than one task in a > job, which means the job configuration usually includes lots of redundant > items. To specify the number of task for task configuration will solve this > issue. > # Container host is useful for locality testing. It is obnoxious to specify > container host for each task for tests unrelated to locality. 
We would like > to make it optional. > # For most tests, we don't care about job.id. Make it optional and generated > automatically by default. > # job.finish.ms doesn't make sense, just remove it. > # container type and container priority should be optional as well. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
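Putting the improvement list above together, a simplified SLS job spec could look something like the sketch below. Only {{num.nodes}} and {{num.racks}} are named in the comment; all other keys ({{job.tasks}}, {{count}}, etc.) are illustrative guesses, not taken from the patch. Note that {{container.host}}, {{job.id}}, and container type/priority are simply omitted and would default automatically:

```json
{
  "num.nodes": 20,
  "num.racks": 4,
  "am.type": "mapreduce",
  "job.start.ms": 0,
  "job.tasks": [
    { "count": 10, "container.type": "map", "duration.ms": 10000 },
    { "count": 2, "container.type": "reduce", "duration.ms": 20000 }
  ]
}
```

Compared with listing every task individually, the {{count}}-style form removes the redundant per-task entries that made the old format fail to scale.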
[jira] [Commented] (YARN-6522) Make SLS JSON input file format simple and scalable
[ https://issues.apache.org/jira/browse/YARN-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994123#comment-15994123 ] Hadoop QA commented on YARN-6522: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s{color} | {color:red} hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 11s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 4 new + 46 unchanged - 1 fixed = 50 total (was 47) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 4m 30s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 19s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.sls.TestSLSRunner | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6522 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866096/YARN-6522.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f0cb7dc392fb 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cedaf4c | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15807/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-sls-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15807/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/15807/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-sls.txt | | Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/15807/testReport/ | | modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls | | Console output | https://builds.apache.org/job/PreCommit-YARN-Build/15807/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Make SLS JSON input fil
[jira] [Commented] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations
[ https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994145#comment-15994145 ] Hadoop QA commented on YARN-6473: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 41s{color} | {color:red} hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 13s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 29s{color} | {color:orange} root: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 9s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 24s{color} | {color:green} hadoop-sls in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}134m 52s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | YARN-6473 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12866080/YARN-6473.v1.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 43b240efec40 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / cedaf4c | | Default Java | 1.8.0_121 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-YARN-Build/15804/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-sls-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/15804/artifact/patchprocess/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-YARN-Build/
[jira] [Commented] (YARN-6548) Adding reservation metrics in QueueMetrics
[ https://issues.apache.org/jira/browse/YARN-6548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994183#comment-15994183 ] Hadoop QA commented on YARN-6548: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 29s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 38s{color} | {color:red} hadoop-tools/hadoop-sls in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 13m 47s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 47s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 57s{color} | {color:orange} root: The patch generated 11 new + 82 unchanged - 0 fixed = 93 total (was 82) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 21s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 27s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 3s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 29s{color} | {color:red} hadoop-sls in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 42s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}126m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | org.apache.hadoop.yarn.server.resourcemanager.reservation.InMemoryPlan.deleteReservation(ReservationId) does not release lock on all exception paths At InMemoryPlan.java:on all exception paths At InMemoryPlan.java:[line 363] | | | org.apache.hadoop.yarn.server.resourcemanager.reservation.InMemoryPlan.updateReservation(ReservationAllocation) does not release lock on all exception paths At InMemoryPlan.java:on all exception paths At InMemoryPlan.java:[line 284] | | Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart | | | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/
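The two new FindBugs warnings above share one pattern: {{InMemoryPlan.deleteReservation}} and {{updateReservation}} take a lock and can throw before releasing it. A minimal standalone sketch of the fix pattern (class and method names are illustrative, not the actual InMemoryPlan code): the unlock must sit in a {{finally}} block so an exception thrown while the lock is held cannot leak it.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockReleaseSketch {
    private final ReentrantLock writeLock = new ReentrantLock();

    // deleteEntry stands in for a method like deleteReservation (illustrative):
    public boolean deleteEntry(boolean failMidway) {
        writeLock.lock();
        try {
            if (failMidway) {
                throw new IllegalStateException("failure while lock is held");
            }
            return true;
        } finally {
            // FindBugs flags methods where this unlock is NOT in a finally
            // block, because an exception path would then leak the lock.
            writeLock.unlock();
        }
    }

    public boolean isHeld() {
        return writeLock.isLocked();
    }

    public static void main(String[] args) {
        LockReleaseSketch plan = new LockReleaseSketch();
        try {
            plan.deleteEntry(true);
        } catch (IllegalStateException expected) {
            // the interesting part is the lock state afterwards
        }
        // prints: lock still held after exception: false
        System.out.println("lock still held after exception: " + plan.isHeld());
    }
}
```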
[jira] [Commented] (YARN-6519) Fix warnings from Spotbugs in hadoop-yarn-server-resourcemanager
[ https://issues.apache.org/jira/browse/YARN-6519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15994220#comment-15994220 ] Weiwei Yang commented on YARN-6519: --- Thanks [~Naganarasimha] for all the help along the way :). > Fix warnings from Spotbugs in hadoop-yarn-server-resourcemanager > > > Key: YARN-6519 > URL: https://issues.apache.org/jira/browse/YARN-6519 > Project: Hadoop YARN > Issue Type: Bug > Components: resourcemanager >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Labels: findbugs > Fix For: 2.9.0, 3.0.0-alpha3 > > Attachments: YARN-6519.001.patch, YARN-6519.002.patch, > YARN-6519-branch-2.001.patch > > > There are 8 findbugs warnings in hadoop-yarn-server-resourcemanager since > we switched to spotbugs: > # > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager$1.compare(CSQueue, > CSQueue) incorrectly handles float value > # org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType.index > field is public and mutable > # > org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.EMPTY_CONTAINER_LIST > is a mutable collection which should be package protected > # > org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.EMPTY_CONTAINER_LIST > is a mutable collection which should be package protected > # > org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics.queueMetrics > is a mutable collection > # > org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.cleanupStaledPreemptionCandidates(long) > makes inefficient use of keySet iterator instead of entrySet iterator > # > org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.transferStateFromAttempt(RMAppAttempt) > makes inefficient use of keySet iterator instead of entrySet iterator > # > org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.cleanupPreemptionList() > makes inefficient use of keySet iterator instead of entrySet iterator > See more from > [https://builds.apache.org/job/PreCommit-HADOOP-Build/12157/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html]
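For reference, the float-comparison warning (#1) and the keySet-iterator warnings (#6-#8) in the list above boil down to two small idioms. A standalone sketch with illustrative class and method names (not the actual resourcemanager code): use {{Float.compare}} instead of arithmetic or chained comparisons in a comparator, and iterate {{entrySet()}} instead of {{keySet()}} when the value is also needed.

```java
import java.util.HashMap;
import java.util.Map;

public class WarningPatterns {
    // Pattern behind warning #1: 'return (int) (a - b);' or chained '<'/'>'
    // on floats breaks the Comparator contract (rounding, NaN ordering);
    // Float.compare is the safe form.
    static int compareFloats(float a, float b) {
        return Float.compare(a, b);
    }

    // Pattern behind warnings #6-#8: looping over keySet() and calling
    // map.get(key) does two hash lookups per entry; entrySet() does one.
    static long sumValues(Map<String, Long> map) {
        long sum = 0;
        for (Map.Entry<String, Long> e : map.entrySet()) {
            sum += e.getValue(); // no second lookup via map.get(key)
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(compareFloats(0.1f, 0.2f)); // negative
        Map<String, Long> m = new HashMap<>();
        m.put("a", 1L);
        m.put("b", 2L);
        System.out.println(sumValues(m)); // prints 3
    }
}
```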