For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/
[Apr 8, 2019 1:02:34 AM] (aajisaka) HADOOP-10848. Cleanup calling of sun.security.krb5.Config.

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   FindBugs :

      module:hadoop-common-project/hadoop-kms
      Null passed for non-null parameter of com.google.common.base.Strings.isNullOrEmpty(String) in org.apache.hadoop.crypto.key.kms.server.KMSAudit.op(KMSAuditLogger$OpStatus, Object, UserGroupInformation, String, String, String) Method invoked at KMSAudit.java:[line 195]

   FindBugs :

      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
      Null passed for non-null parameter of com.google.common.base.Strings.emptyToNull(String) in org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.getHealthReport() Method invoked at NodeHealthCheckerService.java:[line 66]
      Null passed for non-null parameter of com.google.common.base.Strings.emptyToNull(String) in org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.getHealthReport() Method invoked at NodeHealthCheckerService.java:[line 72]

   FindBugs :

      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
      Null passed for non-null parameter of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppTransition.transition(RMStateStore, RMStateStoreEvent) At RMStateStore.java:[line 291]
      Null passed for non-null parameter of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.updateApplicationPriority(Priority, ApplicationId, SettableFuture, UserGroupInformation) At CapacityScheduler.java:[line 2647]
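For context on the "Null passed for non-null parameter" warnings above, here is a minimal sketch of the pattern FindBugs flags and the usual guard. It is illustrative only, in plain Java: the names below are hypothetical and do not correspond to KMSAudit, NodeHealthCheckerService or RMStateStore.

    // NullParamSketch.java -- illustrative sketch, not Hadoop code.
    public class NullParamSketch {

        // Stand-in for a callee whose parameter the analyser treats as
        // non-null: it dereferences the argument, so null would throw an NPE.
        static boolean isEmpty(String s) {
            return s.isEmpty();
        }

        // Caller in the flagged shape: 'status' can provably be null here.
        static void audit(String status) {
            // Typical fix: guard or default the value before the call.
            String safe = (status == null) ? "" : status;
            if (!isEmpty(safe)) {
                System.out.println("audit: " + safe);
            }
        }

        public static void main(String[] args) {
            audit(null);   // guarded, no NullPointerException
            audit("ok");
        }
    }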
   FindBugs :

      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
      org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setEvents(Map) makes inefficient use of keySet iterator instead of entrySet iterator At TimelineEntityDocument.java:[line 159]
      org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setMetrics(Map) makes inefficient use of keySet iterator instead of entrySet iterator At TimelineEntityDocument.java:[line 142]
      Unread field: TimelineEventSubDoc.java:[line 56]
      Unread field: TimelineMetricSubDoc.java:[line 44]
      Switch statement found in org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregate(TimelineMetric, TimelineMetric) where default case is missing At FlowRunDocument.java:[lines 121-136]
      org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregateMetrics(Map) makes inefficient use of keySet iterator instead of entrySet iterator At FlowRunDocument.java:[line 103]
      Possible doublecheck on org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader.client in new org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration) At CosmosDBDocumentStoreReader.java:[lines 73-75]
      Possible doublecheck on org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter.client in new org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration) At CosmosDBDocumentStoreWriter.java:[lines 66-68]
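Two of the documentstore warnings above point at well-known patterns: iterating keySet() and looking each value up again, and lazy initialization with double-checked locking. A minimal sketch of the usual fixes follows; it is illustrative only, and the class, method and field names are hypothetical, not the actual CosmosDB document-store code.

    import java.util.HashMap;
    import java.util.Map;

    // DocumentStoreSketch.java -- illustrative sketch, not Hadoop code.
    public class DocumentStoreSketch {

        // "makes inefficient use of keySet iterator instead of entrySet
        // iterator": entrySet() yields key and value in one pass, avoiding a
        // second map.get(key) lookup for every key.
        static long sumMetrics(Map<String, Long> metrics) {
            long total = 0;
            for (Map.Entry<String, Long> e : metrics.entrySet()) {
                total += e.getValue();   // no extra metrics.get(key) call
            }
            return total;
        }

        // "Possible doublecheck": the double-checked locking idiom is only
        // safe when the lazily initialized field is volatile; otherwise a
        // thread may observe a partially constructed object.
        private static volatile Object client;

        static Object getClient() {
            if (client == null) {                        // unsynchronized check
                synchronized (DocumentStoreSketch.class) {
                    if (client == null) {                // synchronized re-check
                        client = new Object();           // stand-in for a real client
                    }
                }
            }
            return client;
        }

        public static void main(String[] args) {
            Map<String, Long> metrics = new HashMap<>();
            metrics.put("MEMORY", 1024L);
            metrics.put("CPU", 4L);
            System.out.println(sumMetrics(metrics) + " / " + getClient());
        }
    }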
   Failed junit tests :

      hadoop.util.TestDiskCheckerWithDiskIo
      hadoop.hdfs.web.TestWebHdfsTimeouts
      hadoop.yarn.server.resourcemanager.TestResourceTrackerService
      hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
      hadoop.yarn.applications.distributedshell.TestDistributedShell
      hadoop.mapreduce.v2.app.TestRuntimeEstimators
      hadoop.ozone.TestMiniChaosOzoneCluster
      hadoop.fs.ozone.contract.ITestOzoneContractMkdir
      hadoop.fs.ozone.contract.ITestOzoneContractCreate

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/diff-compile-javac-root.txt [336K]

   checkstyle:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/pathlen.txt [12K]

   pylint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/diff-patch-pylint.txt [84K]

   shellcheck:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/whitespace-eol.txt [9.6M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/whitespace-tabs.txt [1.1M]

   findbugs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt [4.0K]

   javadoc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/diff-javadoc-javadoc-root.txt [752K]

   unit:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [172K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [332K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [84K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [24K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [84K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-hdds_container-service.txt [24K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-ozone_integration-test.txt [36K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-ozone_ozonefs.txt [16K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-unit-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt [4.0K]

   asflicense:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1100/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.8.0   http://yetus.apache.org