Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-19 Thread Dinesh Chitlangia
Thanks, Runzhi, for your contribution to HDDS-3041.

As discussed on the Ozone Slack channel, including this fix would require a
Ratis release, which would delay the Ozone release by at least another 4-5
weeks.
Thus we agreed to proceed with the release without HDDS-3041 and then follow
up with a minor release, 0.5.1, to include this fix.

Thank you for your understanding & cooperation.
Looking forward to future collaboration!

Regards,
Dinesh

On Thu, Mar 19, 2020 at 3:40 AM runzhiwang  wrote:

>
> Hi Dinesh Chitlangia,
>
>   I think 0.5.0-beta RC2 is missing the commit for HDDS-3041 ("Memory leak
> of s3g (#637). Contributed by Runzhi Wang"); the commit ID is
> 37a626064dd8ab4d435cc2c95cfec7091dc50226.
>   Without this commit, a stress test on an Ozone cluster causes s3g to leak
> memory, and in the worst case the DataNode CPU reaches 100% as the image
> shows; the root cause is JDK bug JDK-8129861.
>
> Thanks,
> Runzhi Wang
>
>
> -- Original Message --
> *From:* "Dinesh Chitlangia";
> *Sent:* Monday, March 16, 2020, 10:27 AM
> *To:* "Hadoop Common";"Hdfs-dev"<
> hdfs-...@hadoop.apache.org>;"ozone-dev" >;"yarn-dev";"mapreduce-dev"<
> mapreduce-...@hadoop.apache.org>;
> *Subject:* [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2
>
> Hi Folks,
>
> We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
>
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1262
>
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
>
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
>
> *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
>
> Note: This release is beta quality; it is not recommended for production
> use, but we believe it is stable enough to try out the feature set and
> collect feedback.
>
>
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
>
> Thanks,
> Dinesh Chitlangia
>
>
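For reference, a minimal, hypothetical Java sketch of how heap usage could be
sampled inside a stress-test harness to spot a leak like the one Runzhi
describes; the class name HeapWatch and the 10-second interval are
illustrative, and this is not part of Ozone or of the HDDS-3041 fix.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Samples heap usage of the JVM it runs in; to watch s3g it would have to
    // run inside that process or be adapted to poll a remote JMX endpoint.
    class HeapWatch {
      public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
          MemoryUsage heap = memory.getHeapMemoryUsage();
          System.out.printf("heap used: %d MB of %d MB max%n",
              heap.getUsed() >> 20, heap.getMax() >> 20);
          Thread.sleep(10_000L); // sample every 10 seconds during the run
        }
      }
    }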


Re: Next week's Hadoop storage community call: Consistent Read from Standby

2020-03-19 Thread Wei-Chiu Chuang
Thanks to Konstantin for the talk, and to everyone who joined the discussion.

Here's the recording for those who could not join:
https://cloudera.zoom.us/rec/play/v5UkJOD7_243EtfG5ASDAP4tW9W7KKqsg3AeqKYKmBuwBSVXYAKmM-REa-fuUF5xWSRdWlD3Q1fcJTlT?continueMode=true



On Wed, Mar 18, 2020 at 8:52 AM Wei-Chiu Chuang  wrote:

> Just a reminder this is happening in about 2 hours.
>
> On Fri, Mar 13, 2020 at 4:20 PM Wei-Chiu Chuang 
> wrote:
>
>> Hi!
>>
>> Consistent Read from Standby is one of the major features that's going to
>> land in the upcoming Hadoop 3.3.0 release.
>>
>> I'm happy to announce that Konstantin graciously agreed to talk about the
>> production experience with this feature.
>>
>> Please note that we will start this call at 11am Pacific time, instead of
>> 10am as before. I asked Konstantin to allocate a one-hour slot to leave
>> time for Q&A.
>>
>> Date/time:
>> March 18 11am pacific time, 6pm GMT
>>
>> Please join via Zoom:
>> https://cloudera.zoom.us/j/880548968
>>
>> Past meeting minutes:
>>
>> https://docs.google.com/document/d/1jXM5Ujvf-zhcyw_5kiQVx6g-HeKe-YGnFS_1-qFXomI/edit
>>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1443/

[Mar 18, 2020 11:48:52 AM] (github) HADOOP-16858. S3Guard fsck: Add option to remove orphaned entries
[Mar 18, 2020 1:27:13 PM] (github) HDFS-15208. Suppress bogus AbstractWadlGeneratorGrammarGenerator in KMS
[Mar 18, 2020 1:44:44 PM] (github) HADOOP-16054. Update Dockerfile to use Bionic.
[Mar 18, 2020 2:14:18 PM] (github) HADOOP-16920 ABFS: Make list page size configurable.
[Mar 18, 2020 3:30:45 PM] (ayushsaxena) HDFS-14919. Provide Non DFS Used per DataNode in DataNode UI.




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 
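
For context, the hadoop-cos warnings above are common FindBugs patterns. Below
is a minimal, hypothetical Java sketch of the kind of change each pattern
usually calls for (dropping a null check on a value that cannot be null,
returning a defensive copy instead of an internal array, naming the charset
instead of calling new String(byte[]), and using try-with-resources so the
stream-cleanup obligation is discharged); it is not the actual hadoop-cos code.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    class FindBugsFixSketch {

      // Hypothetical stand-in for the internal ReadBuffer array.
      private final byte[] buffer = new byte[4096];

      // "Redundant nullcheck": 'new File(...)' never returns null, so no
      // null check on 'dir' is needed before using it.
      static boolean createDir(String dirPath) {
        File dir = new File(dirPath);
        return dir.exists() || dir.mkdirs();
      }

      // "May expose internal representation": hand out a defensive copy
      // rather than the internal array itself.
      byte[] getBuffer() {
        return Arrays.copyOf(buffer, buffer.length);
      }

      // "Found reliance on default encoding": name the charset explicitly
      // instead of calling new String(byte[]).
      static String bytesToString(byte[] raw) {
        return new String(raw, StandardCharsets.UTF_8);
      }

      // "May fail to clean up java.io.InputStream": try-with-resources closes
      // the stream on every path.
      static long uploadPart(File part) throws IOException {
        try (InputStream in = new FileInputStream(part)) {
          return in.skip(part.length()); // stand-in for the real upload call
        }
      }
    }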

Failed CTEST tests :

   remote_block_reader 
   memcheck_remote_block_reader 
   bad_datanode 
   memcheck_bad_datanode 

Failed junit tests :

   hadoop.io.compress.TestCompressorDecompressor 
   hadoop.io.compress.snappy.TestSnappyCompressorDecompressor 
   hadoop.hdfs.TestDeadNodeDetection 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.TestMapreduceConfigFields 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1443/artifact/out/diff-compile-cc-root.txt
  [32K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1443/artifact/out/diff-compile-javac-root.txt
  [428K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1443/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1443/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1443/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1443/artifact/out/diff-patch-shellcheck.txt
  [20K]

   sh

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-03-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
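
For context, the "unboxed and then immediately reboxed" warning usually points
at code that calls longValue() (or a similar method) on an already-boxed value
only for the compiler to autobox the result again. A minimal, hypothetical
sketch of the pattern and the usual fix, with the method name copyTimestamped
standing in for the flagged code (this is not the actual ColumnRWHelper
implementation):

    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    class ReboxSketch {
      // Copies timestamp-keyed cell values into a sorted map.
      static NavigableMap<Long, Object> copyTimestamped(Map<Long, Object> cells) {
        NavigableMap<Long, Object> results = new TreeMap<>();
        for (Map.Entry<Long, Object> entry : cells.entrySet()) {
          // Flagged form: longValue() unboxes the key and put() reboxes it.
          //   results.put(entry.getKey().longValue(), entry.getValue());
          // Preferred form: keep the already-boxed Long as-is.
          results.put(entry.getKey(), entry.getValue());
        }
        return results;
      }
    }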

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.sls.TestSLSRunner 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/629/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
  [32K]
   
https://builds.apache.org/job/