Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/

[Jul 29, 2019 8:27:55 AM] (elek) HDDS-1867. Invalid Prometheus metric name from 
JvmMetrics
[Jul 29, 2019 8:46:11 AM] (elek) HDDS-1852. Fix typo in TestOmAcls
[Jul 29, 2019 9:04:45 AM] (elek) HDDS-1682. TestEventWatcher.testMetrics is 
flaky
[Jul 29, 2019 9:54:36 AM] (elek) HDDS-1725. pv-test example to test csi is not 
working
[Jul 29, 2019 12:55:12 PM] (weichiu) HDFS-12967. NNBench should support 
multi-cluster access. Contributed by
[Jul 29, 2019 4:05:24 PM] (eyang) HDDS-1833. Moved RefCountedDB stacktrace to 
log level trace.   
[Jul 29, 2019 4:39:40 PM] (arp7) HDDS-1391 : Add ability in OM to serve delta 
updates through an API.
[Jul 29, 2019 6:00:22 PM] (elgoiri) HDFS-14670: RBF: Create secret manager 
instance using
[Jul 29, 2019 8:50:14 PM] (xkrogen) HDFS-14639. [Dynamometer] Remove 
unnecessary duplicate directory from
[Jul 29, 2019 9:31:34 PM] (weichiu) HDFS-14429. Block remain in COMMITTED but 
not COMPLETE caused by
[Jul 29, 2019 11:42:00 PM] (bharat) HDDS-1829 On OM reload/restart 
OmMetrics#numKeys should be updated
[Jul 29, 2019 11:44:48 PM] (github) Revert HDDS-1829 On OM reload/restart 
OmMetrics#numKeys should be
[Jul 30, 2019 12:37:26 AM] (aajisaka) HADOOP-16435. RpcMetrics should not 
retained forever. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

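For readers less familiar with these FindBugs patterns, the first warning is the "implements Cloneable but does not define clone()" idiom and the other two describe an equals() that casts its argument without checking for null or for the argument's runtime type. Below is a minimal, hypothetical Java sketch of both shapes and their conventional fixes; the class and field names are invented for illustration and this is not the actual MaWo code.

    // Hypothetical sketches only; not the actual TaskStatus/WorkerId code.

    // Pattern 1: a class that declares Cloneable should also override
    // clone(); otherwise callers either cannot call clone() at all or
    // silently get the default shallow Object.clone() behaviour.
    class TaskStatusSketch implements Cloneable {
      private String taskId;
      private long lastHeartbeat;

      @Override
      public TaskStatusSketch clone() {
        try {
          return (TaskStatusSketch) super.clone();
        } catch (CloneNotSupportedException e) {
          throw new AssertionError(e); // unreachable: we implement Cloneable
        }
      }
    }

    // Pattern 2: equals() rejects null and non-WorkerIdSketch arguments
    // before casting instead of assuming the argument type.
    class WorkerIdSketch {
      private final String hostname;

      WorkerIdSketch(String hostname) {
        this.hostname = hostname;
      }

      @Override
      public boolean equals(Object obj) {
        if (this == obj) {
          return true;
        }
        if (!(obj instanceof WorkerIdSketch)) { // also false for null
          return false;
        }
        WorkerIdSketch other = (WorkerIdSketch) obj;
        return java.util.Objects.equals(hostname, other.hostname);
      }

      @Override
      public int hashCode() {
        return java.util.Objects.hashCode(hostname);
      }
    }
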
FindBugs :

   module:hadoop-tools/hadoop-aws 
   Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% 
of time Unsynchronized access at LocalMetadataStore.java:75% of time 
Unsynchronized access at LocalMetadataStore.java:[line 623] 

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/diff-patch-pylint.txt
  [216K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1213/artifact/out/whitespace-tabs.txt
  [1.1M]

 

Re: [DISCUSS] EOL 2.8 or another 2.8.x release?

2019-07-30 Thread 俊平堵
+1 for one more release in 2.8.x. However, it sounds too early to declare 2.8
EoL. I would prefer to discuss this after most people have migrated to 2.9 or
3.x.

Thanks,

Junping

Akira Ajisaka wrote on Fri, Jul 26, 2019 at 12:03 PM:

> I'm +1 for one more release in 2.8.x and then declaring 2.8 EoL.
>
> > would be even happier if we could move people to 2.9.x
> Agreed.
>
> -Akira
>
> On Thu, Jul 25, 2019 at 10:59 PM Steve Loughran wrote:
> >
> > I'm in favour of 1 more release (it fixes the off-by-one bug in
> > S3AInputStream, HADOOP-16109), but would be even happier if we could move
> > people to 2.9.x.
> >
> > maybe do a 2.9.x release and declare that 2.8 is EOL?
> >
> >
> > On Thu, Jul 25, 2019 at 2:08 PM Wei-Chiu Chuang wrote:
> >
> > > My bad -- Didn't realize I was looking at the old Hadoop page.
> > > Here's the correct list of releases.
> > > https://hadoop.apache.org/releases.html
> > >
> > > On Thu, Jul 25, 2019 at 12:49 AM 张铎 (Duo Zhang) wrote:
> > >
> > > > IIRC we have a 2.8.5 release?
> > > >
> > > > On the download page:
> > > >
> > > > 2.8.5 2018 Sep 15
> > > >
> > > > > Wei-Chiu Chuang wrote on Thu, Jul 25, 2019 at 9:39 AM:
> > > >
> > > > > The last 2.8 release (2.8.4) was made last May, more than a year
> > > > > ago. https://hadoop.apache.org/old/releases.html
> > > > >
> > > > > How do folks feel about the fate of branch-2.8? During the last
> > > > > community meetup in June, it sounded like most users are still on
> > > > > 2.8 or even 2.7, so I don't think we want to abandon 2.8 just yet.
> > > > >
> > > > > I would personally want to urge folks to move up to 3.x, so I can
> > > > > stop cherry-picking stuff all the way down into 2.8. But it's not up
> > > > > to me alone to decide :)
> > > > >
> > > > > How do people feel about having another 2.8 release or two? I am
> > > > > not saying I want to drive it, but I want to raise awareness that
> > > > > folks are still on 2.8 and there hasn't been an update for over a
> > > > > year.
> > > > >
> > > > > Thoughts?
> > > > >
> > > >
> > >
>


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-07-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/

[Jul 29, 2019 9:34:31 PM] (weichiu) HDFS-14429. Block remain in COMMITTED but 
not COMPLETE caused by
[Jul 29, 2019 9:36:57 PM] (xkrogen) HDFS-12703. Exceptions are fatal to 
decommissioning monitor. Contributed
[Jul 30, 2019 12:40:14 AM] (aajisaka) HADOOP-16435. RpcMetrics should not 
retained forever. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

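The "boxed value is unboxed and then immediately reboxed" warning typically comes from converting a wrapper object through an intermediate primitive. A minimal, hypothetical illustration of that shape and the simpler alternative follows; it is unrelated to the actual ColumnRWHelper internals.

    // Hypothetical illustration; not the ColumnRWHelper logic.
    class ReboxSketch {
      // Flagged shape: (Long) obj unboxes to fit the primitive local, and
      // the Long return type immediately reboxes it, costing an extra
      // allocation.
      static Long roundTrip(Object obj) {
        long value = (Long) obj;
        return value; // autoboxing happens here
      }

      // Keeping the value boxed avoids the unbox/rebox round trip.
      static Long direct(Object obj) {
        return (Long) obj;
      }
    }
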
Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.namenode.TestEditLogRace 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [284K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/398/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  

Re: Any thoughts making Submarine a separate Apache project?

2019-07-30 Thread 俊平堵
Thanks Vinod for these great suggestions. I agree with most of your comments
above.

bq. "For the Apache Hadoop community, this will be treated simply as a code
change and so needs a committer +1?"
IIUC, this should be treated as a feature branch merge, so maybe three
committer +1s are needed here, according to
https://hadoop.apache.org/bylaws.html.

bq. Can somebody who has cycles and has been on the ASF lists for a while
look into the process here?
I can check with ASF members who have experience with this if no one has
done so yet.

Thanks,

Junping

Vinod Kumar Vavilapalli wrote on Mon, Jul 29, 2019 at 9:46 PM:

> Looks like there's a meaningful push behind this.
>
> Given that the desire is to fork off Apache Hadoop, you'd want to make sure
> this enthusiasm turns into building a real, independent, and, more
> importantly, sustainable community.
>
> Given that there were two official releases off the Apache Hadoop project,
> I doubt you'd need to go through the incubator process. Instead you can
> directly propose a new TLP to the ASF board. The last few times this
> happened were with ORC, and long before that with Hive, HBase, etc. Can
> somebody who has cycles and has been on the ASF lists for a while look into
> the process here?
>
> For the Apache Hadoop community, this will be treated simply as a code
> change and so needs a committer +1? You could be more gentle by formally
> doing a vote once a process doc is written down.
>
> Back to the sustainable community point: as part of drafting this proposal,
> you'd definitely want to make sure all of the Apache Hadoop PMC members and
> committers can choose to join the new project as PMC members and committers
> respectively, without any additional constraints.
>
> Thanks
> +Vinod
>
> > On Jul 25, 2019, at 1:31 PM, Wangda Tan  wrote:
> >
> > Thanks everybody for sharing your thoughts. I saw positive feedback from
> > 20+ contributors!
> >
> > So I think we should move it forward. Any suggestions about what we
> > should do?
> >
> > Best,
> > Wangda
> >
> > On Mon, Jul 22, 2019 at 5:36 PM neo  wrote:
> >
> >> +1. This is neo from the TiDB & TiKV community.
> >> Thanks Xun for bringing this up.
> >>
> >> In TiKV, our CNCF project's open source distributed KV storage system,
> >> Hadoop Submarine's machine learning engine helps us optimize data
> >> storage, solving some problems with data hotspots and data shuffles.
> >>
> >> We are also preparing to use the Hadoop Submarine machine learning
> >> engine to improve the performance of TiDB, our open source distributed
> >> relational database.
> >>
> >> I think that if Submarine can be independent, it will develop faster and
> >> better. Thanks to the Hadoop community for developing Submarine!
> >>
> >> Best Regards,
> >> neo
> >> www.pingcap.com / https://github.com/pingcap/tidb /
> >> https://github.com/tikv
> >>
> >> Xun Liu wrote on Mon, Jul 22, 2019 at 4:07 PM:
> >>
> >>> @adam.antal
> >>>
> >>> The Submarine development team has completed the following preparations:
> >>> 1. Established a temporary test repository on GitHub.
> >>> 2. Changed the package name of Hadoop Submarine from org.hadoop.submarine
> >>> to org.submarine.
> >>> 3. Combined the Linkedin/TonY code into the Hadoop Submarine module.
> >>> 4. Hooked the GitHub repository into the Travis CI system; all test cases
> >>> have been run there.
> >>> 5. Several Hadoop Submarine users completed system testing using the code
> >>> in this repository.
> >>>
> >>> Zhao Xin (赵欣) wrote on Mon, Jul 22, 2019 at 9:38 AM:
> >>>
>  Hi
> 
>  I am a teacher at Southeast University (https://www.seu.edu.cn/). Our
>  department is electrical engineering. Our teaching teams and students use
>  Hadoop Submarine for big data analysis and for automation control of
>  electrical equipment.
> 
>  Many thanks to the Hadoop community for providing us with machine
>  learning tools like Submarine.
> 
>  I hope Hadoop Submarine keeps getting better and better.
> 
> 
>  ==
>  赵欣 (Zhao Xin)
>  东南大学电气工程学院 (School of Electrical Engineering, Southeast University)
> 
>  -
> 
>  Zhao XIN
> 
>  School of Electrical Engineering
> 
>  ==
>  2019-07-18
> 
> 
>  *From:* Xun Liu 
>  *Date:* 2019-07-18 09:46
>  *To:* xinzhao 
>  *Subject:* Fwd: Re: Any thoughts making Submarine a separate Apache
>  project?
> 
> 
>  -- Forwarded message -
>  From: dashuiguailu...@gmail.com 
>  Date: Wed, Jul 17, 2019, 3:17 PM
>  Subject: Re: Re: Any thoughts making Submarine a separate Apache project?
>  To: Szilard Nemeth , runlin zhang <runlin...@gmail.com>
>  Cc: Xun Liu , common-dev <common-...@hadoop.apache.org>,
>  yarn-dev , hdfs-dev <hdfs-...@hadoop.apache.org>,
>  mapreduce-dev <mapreduce-dev@hadoop.apache.org>,
>  submarine-dev <submarine-...@hadoop.apache.org>
> 
> 
>  +1. Good idea, we are very much looking forward to it.