Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/

[Jul 25, 2019 2:54:46 PM] (31469764+bshashikant) HDDS-1749 : Ozone Client should randomize the list of nodes in pipeline
[Jul 25, 2019 3:51:11 PM] (github) HDDS-1842. Implement S3 Abort MPU request to use Cache and DoubleBuffer.
[Jul 25, 2019 4:52:02 PM] (xyao) HDDS-1858. mTLS support for Ozone is not correct. Contributed by
[Jul 25, 2019 9:25:39 PM] (aengineer) HDDS-1850. ReplicationManager should consider inflight replication and
[Jul 25, 2019 11:14:50 PM] (arp7) HDDS-1830 OzoneManagerDoubleBuffer#stop should wait for daemon thread to
[Jul 26, 2019 12:35:23 AM] (iwasakims) HDFS-14135. TestWebHdfsTimeouts Fails intermittently in trunk.

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :

       Parsing Error(s):
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
       Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
       Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
       org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

    FindBugs :

       module:hadoop-tools/hadoop-aws
       Inconsistent synchronization of org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% of time, unsynchronized access at LocalMetadataStore.java:[line 623]

    Failed junit tests :

       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
       hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/diff-compile-javac-root.txt [332K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/diff-patch-pylint.txt [216K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/whitespace-eol.txt [9.6M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/whitespace-tabs.txt [1.1M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/xml.txt [16K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1209/artifact/out
[jira] [Resolved] (MAPREDUCE-6973) Fix comments on creating _SUCCESS file.
[ https://issues.apache.org/jira/browse/MAPREDUCE-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka resolved MAPREDUCE-6973.
--------------------------------------
    Resolution: Fixed

Committed this to trunk. Thanks [~mehulgarnara].

> Fix comments on creating _SUCCESS file.
> ---------------------------------------
>
>                 Key: MAPREDUCE-6973
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6973
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: documentation
>    Affects Versions: 3.0.0-beta1
>            Reporter: Mehul Garnara (MG)
>            Assignee: Mehul Garnara (MG)
>            Priority: Trivial
>              Labels: easyfix, newbie
>             Fix For: 3.3.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> I went through a couple of old JIRA issues and understood that the
> application used to create a "_done" file once a job completed
> successfully. After some discussion the group decided to create
> "_SUCCESS" instead of "_done". However, while studying the code, I
> found that one comment still references "_done", and I would like to
> start with a small contribution to fix it.
> Note: I would like to work on this trivial issue so I can get the
> opportunity to follow the standard contribution process; that will
> help me come up to speed quickly for future contributions.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
Re: [DISCUSS] Prefer JUnit5 for new tests
This is a silly question, but what is a "JUnit 5 test"?

We've been slowly adopting the AssertJ APIs in new tests in hadoop-aws, and they work fine in the older codebase, so we can advocate them even for existing tests. They're very good for making assertions about collections; very verbose for classic assertTrue/assertFalse, but they can be used to generate great error strings.

On Fri, Jul 26, 2019 at 9:26 AM Akira Ajisaka wrote:

> Hi folks,
>
> Now we are slowly migrating from JUnit4 to JUnit5.
> https://issues.apache.org/jira/browse/HADOOP-14693
>
> However, as Steve commented [1], if we are going to migrate the
> existing tests, the backporting cost will become too expensive.
> Therefore, I'd like to recommend using JUnit5 for new tests before
> migrating the existing tests. Using junit-vintage-engine, we can mix
> JUnit4 and JUnit5 APIs in the same module, so writing new tests in
> JUnit5 is relatively easy.
>
> Any thoughts?
>
> [1]
> https://issues.apache.org/jira/browse/HADOOP-16318?focusedCommentId=16890955&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16890955
>
> -Akira
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
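The AssertJ style described above can be sketched as follows. This is a hypothetical JUnit 4 test (not from the Hadoop codebase), assuming assertj-core is on the test classpath; the class and variable names are illustrative:

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class TestAssertJStyle {

  @Test
  public void testCollectionAssertions() {
    List<String> hosts = Arrays.asList("dn1", "dn2", "dn3");

    // AssertJ collection assertions chain fluently, and on failure the
    // message reports the actual contents of the collection rather than
    // just "expected:<true> but was:<false>".
    assertThat(hosts)
        .describedAs("datanode hosts in the pipeline")
        .hasSize(3)
        .contains("dn2")
        .doesNotContain("dn4");

    // The classic JUnit equivalent needs a hand-built message to be
    // equally informative:
    // assertTrue("dn2 not found in " + hosts, hosts.contains("dn2"));
  }
}
```

This illustrates the trade-off mentioned in the thread: for a single boolean check, `assertThat(x).isTrue()` is wordier than `assertTrue(x)`, but for collections the richer failure messages come for free.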
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/

[Jul 26, 2019 12:54:27 AM] (iwasakims) HDFS-14135. TestWebHdfsTimeouts Fails intermittently in trunk.

-1 overall

The following subsystems voted -1:
    asflicense compile findbugs hadolint mvninstall mvnsite pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :

       Parsing Error(s):
           hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
           hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
       Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests :

       hadoop.ha.TestZKFailoverController
       hadoop.registry.secure.TestSecureLogins
       hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
       hadoop.yarn.client.api.impl.TestAMRMClient

   mvninstall:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/patch-mvninstall-root.txt [332K]

   compile:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/patch-compile-root-jdk1.7.0_95.txt [192K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/patch-compile-root-jdk1.7.0_95.txt [192K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/patch-compile-root-jdk1.7.0_95.txt [192K]

   compile:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/patch-compile-root-jdk1.8.0_212.txt [168K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/patch-compile-root-jdk1.8.0_212.txt [168K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/patch-compile-root-jdk1.8.0_212.txt [168K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/diff-patch-hadolint.txt [4.0K]

   mvnsite:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/patch-mvnsite-root.txt [256K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/diff-patch-shellcheck.txt [72K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/whitespace-eol.txt [12M]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/whitespace-tabs.txt [1.2M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/xml.txt [12K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt [8.0K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/394/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt [1.1M]

   unit:
       https://bu
Re: [DISCUSS] Prefer JUnit5 for new tests
+1 if it is indeed possible to have both.

Thanks
+Vinod

> On Jul 26, 2019, at 1:56 PM, Akira Ajisaka wrote:
>
> Hi folks,
>
> Now we are slowly migrating from JUnit4 to JUnit5.
> https://issues.apache.org/jira/browse/HADOOP-14693
>
> However, as Steve commented [1], if we are going to migrate the
> existing tests, the backporting cost will become too expensive.
> Therefore, I'd like to recommend using JUnit5 for new tests before
> migrating the existing tests. Using junit-vintage-engine, we can mix
> JUnit4 and JUnit5 APIs in the same module, so writing new tests in
> JUnit5 is relatively easy.
>
> Any thoughts?
>
> [1]
> https://issues.apache.org/jira/browse/HADOOP-16318?focusedCommentId=16890955&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16890955
>
> -Akira
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
[DISCUSS] Prefer JUnit5 for new tests
Hi folks,

Now we are slowly migrating from JUnit4 to JUnit5.
https://issues.apache.org/jira/browse/HADOOP-14693

However, as Steve commented [1], if we are going to migrate the existing tests, the backporting cost will become too expensive. Therefore, I'd like to recommend using JUnit5 for new tests before migrating the existing tests. Using junit-vintage-engine, we can mix JUnit4 and JUnit5 APIs in the same module, so writing new tests in JUnit5 is relatively easy.

Any thoughts?

[1] https://issues.apache.org/jira/browse/HADOOP-16318?focusedCommentId=16890955&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16890955

-Akira

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
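The junit-vintage-engine setup described above can be sketched as a pair of Maven test-scope dependencies. This is a hypothetical pom.xml fragment, not Hadoop's actual build configuration; the version number is illustrative:

```xml
<!-- JUnit 5 (Jupiter) API and engine, for writing new tests. -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter</artifactId>
  <version>5.5.1</version>
  <scope>test</scope>
</dependency>

<!-- The vintage engine runs existing JUnit 4 tests on the JUnit
     Platform, so old JUnit 4 tests and new JUnit 5 tests can
     coexist in the same module without migration. -->
<dependency>
  <groupId>org.junit.vintage</groupId>
  <artifactId>junit-vintage-engine</artifactId>
  <version>5.5.1</version>
  <scope>test</scope>
</dependency>
```

With both engines on the test classpath, a sufficiently recent maven-surefire-plugin discovers and runs JUnit 4 and JUnit 5 tests in a single `mvn test` invocation, which is what makes the "new tests in JUnit5 first, migrate later" approach low-cost.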