Re: [VOTE] Release Apache Hadoop Submarine 0.2.0 - RC0
Hi folks,

Thanks for helping to vote on this Submarine 0.2.0 release! With 4 binding votes, 9 non-binding votes, and no veto, the vote passes. I'm going to work on the next steps.

Thanks,
Zhankun

On Fri, 21 Jun 2019 at 16:58, Rohith Sharma K S wrote:

> +1 (binding)
> Verified basics by installing a cluster.
>
> -Rohith Sharma K S
>
> On Thu, 6 Jun 2019 at 18:53, Zhankun Tang wrote:
>
>> Hi folks,
>>
>> Thanks to all of you who have contributed to this Submarine 0.2.0 release.
>> We now have a release candidate (RC0) for Apache Hadoop Submarine 0.2.0.
>>
>> The artifacts for this Submarine 0.2.0 RC0 are available here:
>> https://home.apache.org/~ztang/submarine-0.2.0-rc0/
>>
>> Its RC tag in git is "submarine-0.2.0-RC0".
>>
>> The Maven artifacts are available via repository.apache.org at
>> https://repository.apache.org/content/repositories/orgapachehadoop-1221/
>>
>> This vote will run 7 days (5 weekdays), ending on 13th June at 11:59 pm PST.
>>
>> The highlights of this release:
>>
>> 1. LinkedIn's TonY runtime support in Submarine
>> 2. PyTorch enabled in Submarine with both the YARN native service runtime (single node) and the TonY runtime
>> 3. Support for an uber jar of Submarine to submit the job
>> 4. A YAML file to describe a job
>> 5. Notebook support (via the Apache Zeppelin Submarine interpreter)
>>
>> Thanks to Sunil, Wangda, Xun, Zac, Keqiu, and Szilard for helping me prepare the release.
>>
>> I have done some testing with my pseudo cluster. My +1 (non-binding) to start.
>>
>> Regards,
>> Zhankun
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/

[Jun 20, 2019 8:42:27 AM] (stevel) HADOOP-16379: S3AInputStream.unbuffer should merge input stream stats
[Jun 20, 2019 8:56:40 AM] (stevel) HADOOP-15183. S3Guard store becomes inconsistent after partial failure
[Jun 20, 2019 2:36:42 PM] (elek) HDDS-1508. Provide example k8s deployment files for the new CSI server
[Jun 20, 2019 4:40:29 PM] (weichiu) HDFS-14581. Appending to EC files crashes NameNode. Contributed by
[Jun 20, 2019 4:42:45 PM] (github) HDDS-1579. Create OMDoubleBuffer metrics. (#871)

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    FindBugs: module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
        Unread field: TimelineEventSubDoc.java:[line 56]
        Unread field: TimelineMetricSubDoc.java:[line 44]

    FindBugs: module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
        Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
        Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
        org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

    Failed junit tests:
        hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.hdfs.server.datanode.TestDirectoryScanner
        hadoop.hdfs.server.namenode.TestReconstructStripedBlocks
        hadoop.mapreduce.v2.app.TestRuntimeEstimators
        hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
        hadoop.ozone.client.rpc.TestOzoneAtRestEncryption
        hadoop.ozone.client.rpc.TestFailureHandlingByClient
        hadoop.ozone.client.rpc.TestCommitWatcher
        hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis
        hadoop.ozone.client.rpc.TestOzoneRpcClient
        hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion
        hadoop.ozone.client.rpc.TestSecureOzoneRpcClient

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/diff-compile-javac-root.txt [332K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/diff-checkstyle-root.txt [17M]
    hadolint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/diff-patch-hadolint.txt [8.0K]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/diff-patch-pylint.txt [120K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/diff-patch-shelldocs.txt [44K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/whitespace-eol.txt [9.6M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/whitespace-tabs.txt [1.1M]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html [8.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-tony-runtime.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1174/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt [4.0K]
    javadoc:
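As context for the mawo-core FindBugs findings above: the report says WorkerId.equals(Object) assumes its argument is a WorkerId and never checks for null. A minimal sketch of a null-safe, type-checked equals/hashCode pair follows; it is illustrative only, and the field names (hostname, ipAddress) are assumptions rather than the actual WorkerId fields.

    import java.util.Objects;

    // Hypothetical sketch, not the real mawo-core WorkerId class.
    public class WorkerId {
      private String hostname;
      private String ipAddress;

      @Override
      public boolean equals(Object obj) {
        if (this == obj) {
          return true;                    // same reference
        }
        if (!(obj instanceof WorkerId)) {
          return false;                   // also returns false for a null argument
        }
        WorkerId other = (WorkerId) obj;  // safe cast after the instanceof check
        return Objects.equals(hostname, other.hostname)
            && Objects.equals(ipAddress, other.ipAddress);
      }

      @Override
      public int hashCode() {
        return Objects.hash(hostname, ipAddress);
      }
    }

The instanceof check covers both warnings at once, since instanceof evaluates to false when the argument is null.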
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML: Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
        hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs: module:hadoop-common-project/hadoop-common
        Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient non-serializable instance field map In GlobalStorageStatistics.java

    FindBugs: module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
        Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests:
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.registry.secure.TestSecureLogins
        hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2

    cc:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
    cc:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt [308K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-checkstyle-root.txt [16M]
    hadolint:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-patch-hadolint.txt [4.0K]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-patch-shellcheck.txt [72K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-patch-shelldocs.txt [8.0K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/whitespace-eol.txt [12M]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/whitespace-tabs.txt [1.2M]
    xml:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/xml.txt [12K]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html [8.0K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
    javadoc:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt [1.1M]
    unit:
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [280K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/359/art
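For reference on the timelineservice-hbase-client finding above: "boxed value is unboxed and then immediately reboxed" usually means an already-boxed value is unwrapped (for example via longValue()) only to be autoboxed again by the very next call. The snippet below is a generic, hypothetical illustration of that pattern and one way to avoid it; it is not the actual ColumnRWHelper.readResultsWithTimestamps code, and the method and parameter names are invented for the example.

    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class ReboxingExample {

      // Flagged shape: 'timestamp' is already a Long; longValue() unboxes it,
      // and put(...) on a Map<Long, Object> immediately autoboxes it back.
      static void flagged(NavigableMap<Long, Object> results, Long timestamp, Object value) {
        results.put(timestamp.longValue(), value); // unbox + rebox round trip
      }

      // Cleaner: pass the existing boxed value through unchanged.
      static void cleaner(NavigableMap<Long, Object> results, Long timestamp, Object value) {
        results.put(timestamp, value);
      }

      public static void main(String[] args) {
        NavigableMap<Long, Object> results = new TreeMap<>();
        cleaner(results, 1561034545L, "cell-value");
        System.out.println(results); // prints {1561034545=cell-value}
      }
    }

Dropping the explicit longValue() call removes the redundant boxing round trip and silences the warning.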
Re: [VOTE] Release Apache Hadoop Submarine 0.2.0 - RC0
+1 (binding)
Verified basics by installing a cluster.

-Rohith Sharma K S

On Thu, 6 Jun 2019 at 18:53, Zhankun Tang wrote:

> Hi folks,
>
> Thanks to all of you who have contributed to this Submarine 0.2.0 release.
> We now have a release candidate (RC0) for Apache Hadoop Submarine 0.2.0.
>
> The artifacts for this Submarine 0.2.0 RC0 are available here:
> https://home.apache.org/~ztang/submarine-0.2.0-rc0/
>
> Its RC tag in git is "submarine-0.2.0-RC0".
>
> The Maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1221/
>
> This vote will run 7 days (5 weekdays), ending on 13th June at 11:59 pm PST.
>
> The highlights of this release:
>
> 1. LinkedIn's TonY runtime support in Submarine
> 2. PyTorch enabled in Submarine with both the YARN native service runtime (single node) and the TonY runtime
> 3. Support for an uber jar of Submarine to submit the job
> 4. A YAML file to describe a job
> 5. Notebook support (via the Apache Zeppelin Submarine interpreter)
>
> Thanks to Sunil, Wangda, Xun, Zac, Keqiu, and Szilard for helping me prepare the release.
>
> I have done some testing with my pseudo cluster. My +1 (non-binding) to start.
>
> Regards,
> Zhankun
Re: [VOTE] Release Apache Hadoop Submarine 0.2.0 - RC0
+1 (binding)

Thanks,
Weiwei

On Jun 21, 2019, 5:33 AM +0800, Wangda Tan wrote:

> +1 (binding). Tested in local cluster and reviewed docs. Thanks!
>
> On Wed, Jun 19, 2019 at 3:20 AM Sunil Govindan wrote:
>
>> +1 (binding)
>> - Tested in local cluster.
>> - Tried the TonY runtime as well.
>> - Doc seems fine now.
>>
>> - Sunil
>>
>> On Thu, Jun 6, 2019 at 6:53 PM Zhankun Tang wrote:
>>
>>> Hi folks,
>>>
>>> Thanks to all of you who have contributed to this Submarine 0.2.0 release.
>>> We now have a release candidate (RC0) for Apache Hadoop Submarine 0.2.0.
>>>
>>> The artifacts for this Submarine 0.2.0 RC0 are available here:
>>> https://home.apache.org/~ztang/submarine-0.2.0-rc0/
>>>
>>> Its RC tag in git is "submarine-0.2.0-RC0".
>>>
>>> The Maven artifacts are available via repository.apache.org at
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1221/
>>>
>>> This vote will run 7 days (5 weekdays), ending on 13th June at 11:59 pm PST.
>>>
>>> The highlights of this release:
>>>
>>> 1. LinkedIn's TonY runtime support in Submarine
>>> 2. PyTorch enabled in Submarine with both the YARN native service runtime (single node) and the TonY runtime
>>> 3. Support for an uber jar of Submarine to submit the job
>>> 4. A YAML file to describe a job
>>> 5. Notebook support (via the Apache Zeppelin Submarine interpreter)
>>>
>>> Thanks to Sunil, Wangda, Xun, Zac, Keqiu, and Szilard for helping me prepare the release.
>>>
>>> I have done some testing with my pseudo cluster. My +1 (non-binding) to start.
>>>
>>> Regards,
>>> Zhankun