Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/

[Jul 28, 2019 3:11:42 AM] (ayushsaxena) HDFS-14660. [SBN Read] ObserverNameNode should throw StandbyException

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
    Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use the clone method. At TaskStatus.java:[lines 39-346]
    Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
    org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for a null argument. At WorkerId.java:[lines 114-115]

FindBugs : module:hadoop-tools/hadoop-aws
    Inconsistent synchronization of org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% of the time. Unsynchronized access at LocalMetadataStore.java:[line 623]

Failed junit tests:
    hadoop.util.TestReadWriteDiskValidator
    hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
    hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
    hadoop.yarn.applications.distributedshell.TestDistributedShell

cc:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-compile-cc-root.txt [4.0K]
javac:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-compile-javac-root.txt [332K]
checkstyle:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-checkstyle-root.txt [17M]
hadolint:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-patch-hadolint.txt [4.0K]
pathlen:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/pathlen.txt [12K]
pylint:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-patch-pylint.txt [216K]
shellcheck:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-patch-shellcheck.txt [20K]
shelldocs:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-patch-shelldocs.txt [44K]
whitespace:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/whitespace-eol.txt [9.6M]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/whitespace-tabs.txt [1.1M]
xml:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/xml.txt [16K]
findbugs:
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html [8.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [8.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [12K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-ozone_client.txt [8.0K]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
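The two WorkerId findings above (equals assumes the argument type; equals does not check for null) have a standard fix pattern. The sketch below is a hypothetical, single-field simplification — the real mawo WorkerId has a different field layout — but the `instanceof` check resolves both warnings at once, since `null instanceof WorkerId` is false:

```java
import java.util.Objects;

// Hypothetical simplification of the mawo WorkerId class: one field,
// illustrating the FindBugs fix pattern, not the real implementation.
class WorkerId {
    private final String hostname;

    WorkerId(String hostname) {
        this.hostname = hostname;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        // instanceof is false for null, so this single check fixes both
        // warnings: no unchecked cast, no NullPointerException.
        if (!(other instanceof WorkerId)) {
            return false;
        }
        WorkerId that = (WorkerId) other;
        return Objects.equals(hostname, that.hostname);
    }

    @Override
    public int hashCode() {
        // Keep hashCode consistent with equals, as the contract requires.
        return Objects.hash(hostname);
    }
}
```

Note that equals must also return false for arguments of an unrelated type rather than throwing a ClassCastException, which is what the "assumes the argument is of type WorkerId" warning is about.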
Re: Any thoughts making Submarine a separate Apache project?
Thanks Vinod, the proposal to make it a TLP is definitely a great suggestion. I will draft a proposal and keep the thread posted.

Best,
Wangda

On Mon, Jul 29, 2019 at 3:46 PM Vinod Kumar Vavilapalli wrote:

> Looks like there's a meaningful push behind this.
>
> Given the desire is to fork off Apache Hadoop, you'd want to make sure this enthusiasm turns into building a real, independent, but more importantly sustainable community.
>
> Given that there were two official releases off the Apache Hadoop project, I doubt you'd need to go through the incubator process. Instead you can directly propose a new TLP to the ASF board. The last few times this happened were with ORC, and long before that with Hive, HBase, etc. Can somebody who has cycles and has been on the ASF lists for a while look into the process here?
>
> For the Apache Hadoop community, will this be treated simply as a code change and so need a committer +1? You can be more gentle by formally doing a vote once a process doc is written down.
>
> Back to the sustainable-community point: as part of drafting this proposal, you'd definitely want to make sure all of the Apache Hadoop PMC/committers can exercise their will to join this new project as PMC/committers respectively, without any additional constraints.
>
> Thanks
> +Vinod
>
> > On Jul 25, 2019, at 1:31 PM, Wangda Tan wrote:
> >
> > Thanks everybody for sharing your thoughts. I saw positive feedback from 20+ contributors!
> >
> > So I think we should move it forward; any suggestions about what we should do?
> >
> > Best,
> > Wangda
> >
> > On Mon, Jul 22, 2019 at 5:36 PM neo wrote:
> >
> >> +1. This is neo from the TiDB & TiKV community.
> >> Thanks Xun for bringing this up.
> >>
> >> TiKV is our CNCF project's open-source distributed KV storage system. Hadoop Submarine's machine learning engine helps us optimize data storage, and helps us solve some problems with data hotspots and data shuffles.
> >>
> >> We are also ready to use the Hadoop Submarine machine learning engine to improve the performance of TiDB, our open-source distributed relational database.
> >>
> >> I think if Submarine can be independent, it will develop faster and better. Thanks to the Hadoop community for developing Submarine!
> >>
> >> Best Regards,
> >> neo
> >> www.pingcap.com / https://github.com/pingcap/tidb / https://github.com/tikv
> >>
> >> On Mon, Jul 22, 2019 at 4:07 PM, Xun Liu wrote:
> >>
> >>> @adam.antal
> >>>
> >>> The Submarine development team has completed the following preparations:
> >>> 1. Established a temporary test repository on GitHub.
> >>> 2. Changed the package name of Hadoop Submarine from org.hadoop.submarine to org.submarine.
> >>> 3. Combined the LinkedIn/TonY code into the Hadoop Submarine module.
> >>> 4. Ran all test cases on the Travis CI system hooked up to GitHub.
> >>> 5. Several Hadoop Submarine users completed system tests using the code in this repository.
> >>>
> >>> On Mon, Jul 22, 2019 at 9:38 AM, Zhao Xin (赵欣) wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> I am a teacher at Southeast University (https://www.seu.edu.cn/), in the electrical engineering department. Our teaching teams and students use Hadoop Submarine for big data analysis and automation control of electrical equipment.
> >>>>
> >>>> Many thanks to the Hadoop community for providing us with machine learning tools like Submarine.
> >>>>
> >>>> I hope Hadoop Submarine keeps getting better and better.
> >>>>
> >>>> ==
> >>>> Zhao Xin (赵欣)
> >>>> School of Electrical Engineering, Southeast University
> >>>> ==
> >>>> 2019-07-18
> >>>>
> >>>> *From:* Xun Liu
> >>>> *Date:* 2019-07-18 09:46
> >>>> *To:* xinzhao
> >>>> *Subject:* Fwd: Re: Any thoughts making Submarine a separate Apache project?
> >>>>
> >>>> -- Forwarded message -
> >>>> From: dashuiguailu...@gmail.com
> >>>> Date: Wed, Jul 17, 2019 at 3:17 PM
> >>>> Subject: Re: Re: Any thoughts making Submarine a separate Apache project?
> >>>> To: Szilard Nemeth, runlin zhang <runlin...@gmail.com>
> >>>> Cc: Xun Liu, common-dev <common-...@hadoop.apache.org>, yarn-dev, hdfs-dev <hdfs-...@hadoop.apache.org>, mapreduce-dev <mapreduce-dev@hadoop.apache.org>, submarine-dev <submarine-...@hadoop.apache.org>
> >>>>
> >>>> +1. Good idea, we are very much looking forward to it.
> >>>>
> >>>> --
> >>>> dashuiguailu...@gmail.com
> >>>>
> >>>> *From:* Szilard Nemeth
> >>>> *Date:* 2019-07-17 14:55
> >>>> *To:* runlin zhang
> >>>> *CC:* Xun Liu; Hadoop Common; yarn-dev; Hdfs-dev; mapreduce-dev; submarine-dev
> >>>> *Subject:* Re: Any thoughts making Submarine a separate Apache project?
Re: Any thoughts making Submarine a separate Apache project?
Looks like there's a meaningful push behind this.

Given the desire is to fork off Apache Hadoop, you'd want to make sure this enthusiasm turns into building a real, independent, but more importantly sustainable community.

Given that there were two official releases off the Apache Hadoop project, I doubt you'd need to go through the incubator process. Instead you can directly propose a new TLP to the ASF board. The last few times this happened were with ORC, and long before that with Hive, HBase, etc. Can somebody who has cycles and has been on the ASF lists for a while look into the process here?

For the Apache Hadoop community, will this be treated simply as a code change and so need a committer +1? You can be more gentle by formally doing a vote once a process doc is written down.

Back to the sustainable-community point: as part of drafting this proposal, you'd definitely want to make sure all of the Apache Hadoop PMC/committers can exercise their will to join this new project as PMC/committers respectively, without any additional constraints.

Thanks
+Vinod

> On Jul 25, 2019, at 1:31 PM, Wangda Tan wrote:
>
> Thanks everybody for sharing your thoughts. I saw positive feedback from 20+ contributors!
>
> So I think we should move it forward; any suggestions about what we should do?
>
> Best,
> Wangda

> *From:* Szilard Nemeth
> *Date:* 2019-07-17 14:55
> *To:* runlin zhang
> *CC:* Xun Liu; Hadoop Common; yarn-dev; Hdfs-dev; mapreduce-dev; submarine-dev
> *Subject:* Re: Any thoughts making Submarine a separate Apache project?
>
> +1, this is a very great idea. As the Hadoop repository has already grown huge and contains many projects, I think in general it's a good idea to separate projects in the early phase.
>
> On Wed, Jul 17, 2019, 08:50 runlin zhang wrote:
>
> > +1, that will be great!
> >
> >> On Jul 10, 2019, at 3:34 PM, Xun Liu wrote:
> >>
> >> Hi all,
> >>
> >> This is Xun Liu contributing to the Submarine project
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

XML : Parsing Error(s):
    hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
    hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
    hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
    Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean). At ColumnRWHelper.java:[line 335]

Failed junit tests:
    hadoop.hdfs.server.namenode.TestDecommissioningStatus
    hadoop.hdfs.server.datanode.TestDirectoryScanner
    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.registry.secure.TestSecureLogins
    hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
    hadoop.mapreduce.v2.app.TestRecovery

cc:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
javac:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
cc:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt [4.0K]
javac:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt [308K]
checkstyle:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-checkstyle-root.txt [16M]
hadolint:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-patch-hadolint.txt [4.0K]
pathlen:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/pathlen.txt [12K]
pylint:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-patch-pylint.txt [24K]
shellcheck:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-patch-shellcheck.txt [72K]
shelldocs:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-patch-shelldocs.txt [8.0K]
whitespace:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/whitespace-eol.txt [12M]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/whitespace-tabs.txt [1.2M]
xml:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/xml.txt [12K]
findbugs:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
javadoc:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt [1.1M]
unit:
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [240K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [20K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [72K]
    https://
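The timelineservice-hbase FindBugs warning above ("boxed value is unboxed and then immediately reboxed") typically comes from a cast chain like the one sketched below. This is an illustrative reconstruction under assumed types, not the actual ColumnRWHelper code; the real method converts HBase cell values, but the boxing issue is the same:

```java
// Hypothetical reduction of the pattern FindBugs flags as
// BX_UNBOXING_IMMEDIATELY_REBOXED.
class ReboxDemo {
    // Flagged style: the (long) cast unboxes the Long, and the Long
    // return type immediately reboxes it, allocating a fresh wrapper
    // (and throwing NullPointerException if the value is null).
    static Long flagged(Object raw) {
        return (long) (Long) raw;
    }

    // Fixed style: keep the value boxed end to end; no unbox/rebox
    // round trip, and null passes through safely.
    static Long fixed(Object raw) {
        return (Long) raw;
    }

    public static void main(String[] args) {
        Object cellValue = 1564300000000L;
        // Both return equal values; only the flagged version reboxes.
        System.out.println(ReboxDemo.fixed(cellValue));
    }
}
```

The fix FindBugs suggests is simply to drop the primitive cast so the already-boxed value is returned as-is.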