Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/

[Jul 18, 2019 4:46:27 AM] (github) HDDS-1689. Implement S3 Create Bucket request to use Cache and
[Jul 18, 2019 9:10:45 AM] (github) HDDS-1481. Cleanup BasicOzoneFileSystem#mkdir (#1114)
[Jul 18, 2019 9:18:13 AM] (github) HDDS-1767. ContainerStateMachine should have its own executors for
[Jul 18, 2019 10:31:58 AM] (shashikant) HDDS-1780. TestFailureHandlingByClient tests are flaky. Contributed by
[Jul 18, 2019 11:39:05 AM] (shashikant) HDDS-1654. Ensure container state on datanode gets synced to disk
[Jul 18, 2019 12:15:18 PM] (stevel) MAPREDUCE-6521. MiniMRYarnCluster should not create
[Jul 18, 2019 2:27:12 PM] (shashikant) HDDS-1779. TestWatchForCommit tests are flaky. Contributed by Shashikant
[Jul 18, 2019 4:30:53 PM] (eyang) YARN-9568. Fixed NPE in MiniYarnCluster during
[Jul 18, 2019 4:36:15 PM] (github) HDDS-1820. Fix numKeys metrics in OM HA. (#1116)
[Jul 18, 2019 4:39:18 PM] (eyang) YARN-6046. Fixed documentation error in YarnApplicationSecurity.
[Jul 18, 2019 8:28:03 PM] (xyao) HDDS-1822. NPE in SCMCommonPolicy.chooseDatanodes (#1120)
[Jul 18, 2019 10:19:38 PM] (stevel) HADOOP-16437 documentation typo fix: fs.s3a.experimental.input.fadvise

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):

           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core

           Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
           Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
           org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

    Failed junit tests :

       hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap
       hadoop.hdfs.TestMaintenanceState
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
       hadoop.hdfs.server.federation.router.TestRouterRpc
       hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
       hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
       hadoop.ozone.container.ozoneimpl.TestOzoneContainer

   cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/diff-compile-javac-root.txt [332K]

   checkstyle:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/diff-patch-hadolint.txt [8.0K]

   pathlen:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/pathlen.txt [12K]

   pylint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/diff-patch-pylint.txt [212K]

   shellcheck:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/whitespace-eol.txt [9.6M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1202/artifact/out/whitespace-tabs.txt [1.1M]

   xml:

       https://builds.apache.o
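The two WorkerId findings above describe a classic equals() pitfall: casting the argument without checking its type, and not handling null. As a hedged illustration (a minimal stand-in class, not the actual mawo source), a null-safe, type-checked equals() looks like this:

```java
// Hypothetical stand-in for org.apache.hadoop.applications.mawo.server.worker.WorkerId,
// illustrating the two FindBugs findings: equals() must handle null and must
// verify the runtime type before casting.
public class WorkerId {
    private final String hostname;

    public WorkerId(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;                  // reflexive fast path
        }
        if (!(obj instanceof WorkerId)) {
            return false;                 // covers both null and wrong-type arguments
        }
        WorkerId other = (WorkerId) obj;  // safe cast only after the instanceof check
        return hostname.equals(other.hostname);
    }

    @Override
    public int hashCode() {
        return hostname.hashCode();       // keep hashCode consistent with equals
    }

    public static void main(String[] args) {
        WorkerId a = new WorkerId("node-1");
        System.out.println(a.equals(null));                   // false, no NPE
        System.out.println(a.equals("node-1"));               // false, wrong type
        System.out.println(a.equals(new WorkerId("node-1"))); // true
    }
}
```

The `instanceof` test subsumes the null check, since `null instanceof WorkerId` is always false, so a single guard clears both FindBugs warnings.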
Re: Re: Any thoughts making Submarine a separate Apache project?
+1 (non-binding). Good initiative!

A question for someone who has more insight into this area: how much effort would that mean, besides the straightforward Maven work (modifying the pom.xml files)?

On Fri, Jul 19, 2019 at 10:40 AM dashuiguailu...@gmail.com <dashuiguailu...@gmail.com> wrote:

> +1. Submarine is already in use at our company (贝壳找房) and is performing well.
> Looking forward to the next step to provide more features.
>
> dashuiguailu...@gmail.com
>
> From: Oliver Hu
> Date: 2019-07-19 07:50
> To: Jeff Zhang
> CC: sid yu; Xun Liu; Hadoop Common; yarn-dev; Hdfs-dev; mapreduce-dev; submarine-dev
> Subject: Re: Any thoughts making Submarine a separate Apache project?
>
> +1 (non-binding). Making Submarine a separate project would make it easier to integrate with other components in the ML pipeline and to expand across platforms.
>
> On Thu, Jul 18, 2019 at 2:48 AM Jeff Zhang wrote:
>
> > +1. This is Jeff Zhang from the Zeppelin community.
> > Thanks, Xun, for bringing this up. Submarine was integrated into Zeppelin several months ago, and I already see some early adoption of it in China. AI is a fast-growing area, and I believe moving into a separate project would help Submarine catch up with the new trends in AI and release new features more quickly than before.
> >
> > sid yu wrote on Thu, Jul 18, 2019 at 2:06 PM:
> >
> > > +1. We are looking forward to it. The idea is great.
> > >
> > > > On Jul 10, 2019, at 3:34 PM, Xun Liu wrote:
> > > >
> > > > Hi all,
> > > >
> > > > This is Xun Liu, contributing to the Submarine project for deep learning workloads running together with big data workloads on Hadoop clusters.
> > > >
> > > > A number of integrations of Submarine with other projects are finished or in progress, such as Apache Zeppelin, TonY, and Azkaban. The next step for Submarine is to integrate with more projects like Apache Arrow, Redis, and MLflow, to handle end-to-end machine learning use cases like model serving, notebook management, and advanced training optimizations (auto parameter tuning, memory cache optimizations for large training datasets, etc.), and to run on other platforms like Kubernetes or natively on Cloud. LinkedIn also wants to donate the TonY project to Apache so we can put Submarine and TonY together in the same codebase (page #30, https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30).
> > > >
> > > > This expands the scope of the original Submarine project in exciting new ways. Toward that end, would it make sense to create a separate Submarine project at Apache? This could speed up adoption of Submarine and allow it to grow into a full-blown machine learning platform.
> > > >
> > > > There will be lots of technical details to work out, but any initial thoughts on this?
> > > >
> > > > Best Regards,
> > > > Xun Liu
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> > --
> > Best Regards
> >
> > Jeff Zhang
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/

[Jul 19, 2019 12:49:44 AM] (iwasakims) MAPREDUCE-6521. MiniMRYarnCluster should not create

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):

           hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
           hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
           hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client

           Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests :

       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem
       hadoop.registry.secure.TestSecureLogins
       hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
       hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
       hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
       hadoop.mapred.TestReduceFetchFromPartialMem
       hadoop.tools.TestDistCpViewFs
       hadoop.tools.TestHadoopArchives

   cc:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]

   cc:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt [4.0K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt [308K]

   checkstyle:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/pathlen.txt [12K]

   pylint:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-patch-shellcheck.txt [72K]

   shelldocs:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/whitespace-eol.txt [12M]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/whitespace-tabs.txt [1.2M]

   xml:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/xml.txt [12K]

   findbugs:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]

   javadoc:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt [1.1M]

   unit:

       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [232K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/387/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
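The FindBugs entry above flags a boxed value that is unboxed and then immediately reboxed. The sketch below reproduces that shape with illustrative names (not the actual ColumnRWHelper code) and shows the usual fix: keep the value boxed so no round trip through the primitive occurs.

```java
import java.util.Map;
import java.util.TreeMap;

public class ReboxingSketch {
    // Flagged shape: the raw cell value arrives as Object, the caller unboxes
    // it to a primitive long, and Map.put(...) immediately reboxes it to Long.
    static void rebox(Map<Long, Object> results, Object rawTimestamp, Object value) {
        long ts = (Long) rawTimestamp; // unbox ...
        results.put(ts, value);        // ... and immediately rebox: what FindBugs flags
    }

    // Fix: cast to the boxed type and pass it through unchanged.
    static void keepBoxed(Map<Long, Object> results, Object rawTimestamp, Object value) {
        results.put((Long) rawTimestamp, value);
    }

    public static void main(String[] args) {
        Map<Long, Object> results = new TreeMap<>();
        keepBoxed(results, 42L, "v");
        System.out.println(results); // prints {42=v}
    }
}
```

Both variants behave identically; the warning is about the wasted allocation, since reboxing may create a fresh Long object on every call for values outside the small-integer cache.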
Re: Re: Any thoughts making Submarine a separate Apache project?
+1. Submarine is already in use at our company (贝壳找房) and is performing well. Looking forward to the next step to provide more features.

dashuiguailu...@gmail.com

From: Oliver Hu
Date: 2019-07-19 07:50
To: Jeff Zhang
CC: sid yu; Xun Liu; Hadoop Common; yarn-dev; Hdfs-dev; mapreduce-dev; submarine-dev
Subject: Re: Any thoughts making Submarine a separate Apache project?

+1 (non-binding). Making Submarine a separate project would make it easier to integrate with other components in the ML pipeline and to expand across platforms.

On Thu, Jul 18, 2019 at 2:48 AM Jeff Zhang wrote:

> +1. This is Jeff Zhang from the Zeppelin community.
> Thanks, Xun, for bringing this up. Submarine was integrated into Zeppelin several months ago, and I already see some early adoption of it in China. AI is a fast-growing area, and I believe moving into a separate project would help Submarine catch up with the new trends in AI and release new features more quickly than before.
>
> sid yu wrote on Thu, Jul 18, 2019 at 2:06 PM:
>
> > +1. We are looking forward to it. The idea is great.
> >
> > > On Jul 10, 2019, at 3:34 PM, Xun Liu wrote:
> > >
> > > Hi all,
> > >
> > > This is Xun Liu, contributing to the Submarine project for deep learning workloads running together with big data workloads on Hadoop clusters.
> > >
> > > A number of integrations of Submarine with other projects are finished or in progress, such as Apache Zeppelin, TonY, and Azkaban. The next step for Submarine is to integrate with more projects like Apache Arrow, Redis, and MLflow, to handle end-to-end machine learning use cases like model serving, notebook management, and advanced training optimizations (auto parameter tuning, memory cache optimizations for large training datasets, etc.), and to run on other platforms like Kubernetes or natively on Cloud. LinkedIn also wants to donate the TonY project to Apache so we can put Submarine and TonY together in the same codebase (page #30, https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30).
> > >
> > > This expands the scope of the original Submarine project in exciting new ways. Toward that end, would it make sense to create a separate Submarine project at Apache? This could speed up adoption of Submarine and allow it to grow into a full-blown machine learning platform.
> > >
> > > There will be lots of technical details to work out, but any initial thoughts on this?
> > >
> > > Best Regards,
> > > Xun Liu
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
> --
> Best Regards
>
> Jeff Zhang