Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-22 Thread Ayush Saxena
+1 (non-binding)
*Built from source
*Verified Checksums
*Ran some basic Shell Commands.
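
For reference, the verification above can be reproduced with roughly the
following, after downloading the KEYS file linked in the announcement below
(the artifact name is assumed; use whatever tarball is in the RC directory):

    gpg --import KEYS
    gpg --verify hadoop-ozone-0.5.0-beta-src.tar.gz.asc hadoop-ozone-0.5.0-beta-src.tar.gz
    sha512sum hadoop-ozone-0.5.0-beta-src.tar.gz   # compare with the published .sha512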

Thanx Dinesh for driving the release.

-Ayush

On Mon, 16 Mar 2020 at 07:57, Dinesh Chitlangia 
wrote:

> Hi Folks,
>
> We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
>
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1262
>
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
>
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
>
> *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
>
> Note: This release is beta quality, it’s not recommended to use in
> production but we believe that it’s stable enough to try out the feature
> set and collect feedback.
>
>
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
>
> Thanks,
> Dinesh Chitlangia
>


Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-22 Thread Xiaoyu Yao
+1 (binding)
Downloaded source and verified signature.
Verified build and documents.
Deployed an 11-node cluster (3 OMs with HA, 6 datanodes, 1 SCM and 1 S3G).
Verified multiple RATIS-3 pipelines are created as expected.
Tried ozone shell commands via o3 and o3fs, focusing on security- and
HA-related functionality.
Only found a few minor issues that we can fix in follow-up JIRAs.
1) ozone getconf -ozonemanagers does not return all the OM instances:
bash-4.2$ ozone getconf -ozonemanagers
0.0.0.0
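
For context, with OM HA the expectation is that getconf enumerates the
addresses configured under the service ID, roughly like this (the service ID
id1 matches the cluster above; node IDs, hostnames and port are illustrative):

    ozone.om.service.ids: 'id1'
    ozone.om.nodes.id1: 'om1,om2,om3'
    ozone.om.address.id1.om1: 'om1.example.com:9862'
    ozone.om.address.id1.om2: 'om2.example.com:9862'
    ozone.om.address.id1.om3: 'om3.example.com:9862'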
2) The documentation on specifying the service ID/host in the URI can be
improved. Specifically, it should give an example of using the OM service ID
in HA mode; currently it only mentions host/port (see the example after the
help output below).

ozone sh vol create /vol1
Service ID or host name must not be omitted when ozone.om.service.ids is
defined.
bash-4.2$ ozone sh vol create --help
Usage: ozone sh volume create [-hV] [--root] [-q=] [-u=]

Creates a volume for the specified user
      URI of the volume.
      Ozone URI could start with o3:// or without prefix. URI may contain
      the host and port of the OM server. Both are optional. If they are
      not specified it will be identified from the config files.
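
For example, something like the following would make the HA case clear (id1 is
the OM service ID used elsewhere in this mail; the host:port form assumes the
default OM port):

    ozone sh volume create o3://id1/vol1            # HA: use the OM service ID
    ozone sh volume create o3://om-host:9862/vol1   # non-HA: explicit OM host and port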
3) ozone scmcli container list seems to report incorrect numberOfKeys and
usedBytes.
Also, the container owner is set to the current leader OM (om3); should we
use the OM service ID here instead?
bash-4.2$ ozone scmcli container list
{
  "state" : "OPEN",
  "replicationFactor" : "THREE",
  "replicationType" : "RATIS",
  "usedBytes" : 3813,
  "numberOfKeys" : 1,
...
bash-4.2$ ozone sh key list o3://id1/vol1/bucket1/
{
  "volumeName" : "vol1",
  "bucketName" : "bucket1",
  "name" : "k1",
  "dataSize" : 3813,
  "creationTime" : "2020-03-23T03:23:30.670Z",
  "modificationTime" : "2020-03-23T03:23:33.207Z",
  "replicationType" : "RATIS",
  "replicationFactor" : 3
}
{
  "volumeName" : "vol1",
  "bucketName" : "bucket1",
  "name" : "k2",
  "dataSize" : 3813,
  "creationTime" : "2020-03-23T03:18:46.735Z",
  "modificationTime" : "2020-03-23T03:20:15.005Z",
  "replicationType" : "RATIS",
  "replicationFactor" : 3
}


Ran freon with random key generation.
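(Roughly the following; the freon option names are from memory and should be
confirmed with ozone freon randomkeys --help:)

    ozone freon randomkeys --numOfVolumes=1 --numOfBuckets=1 --numOfKeys=10000 \
        --keySize=10240 --replicationType=RATIS --factor=THREE --validateWrites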

Thanks Dinesh for driving the release of beta RC2.

Xiaoyu

On Sun, Mar 22, 2020 at 2:51 PM Aravindan Vijayan
 wrote:

> +1
> Deployed a 3 node cluster
> Tried ozone shell and filesystem commands
> Ran freon load generator
>
> Thanks Dinesh for working on the RC2.
>
> On Sun, Mar 15, 2020 at 7:27 PM Dinesh Chitlangia 
> wrote:
>
> > Hi Folks,
> >
> > We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
> >
> > The RC artifacts are at:
> > https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
> >
> > The public key used for signing the artifacts can be found at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1262
> >
> > The RC tag in git is at:
> > https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
> >
> > This release contains 800+ fixes/improvements [1].
> > Thanks to everyone who put in the effort to make this happen.
> >
> > *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm
> PST.*
> >
> > Note: This release is beta quality, it’s not recommended to use in
> > production but we believe that it’s stable enough to try out the feature
> > set and collect feedback.
> >
> >
> > [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> >
> > Thanks,
> > Dinesh Chitlangia
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1447/

[Mar 22, 2020 6:14:18 AM] (ayushsaxena) HDFS-15227. NPE if the last block 
changes from COMMITTED to COMPLETE

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org

[jira] [Resolved] (YARN-10205) NodeManager stateful restart feature did not work as expected - information only (Resolved)

2020-03-22 Thread Anil Sadineni (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Sadineni resolved YARN-10205.
--
Resolution: Not A Problem

> NodeManager stateful restart feature did not work as expected - information 
> only (Resolved)
> ---
>
> Key: YARN-10205
> URL: https://issues.apache.org/jira/browse/YARN-10205
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: graceful, nodemanager, rolling upgrade, yarn
>Reporter: Anil Sadineni
>Priority: Major
>
> *TL;DR* This is an information-only Jira on the stateful restart of the node 
> manager feature. The unexpected behavior of this feature was due to systemd 
> process configuration in this case. Please read below for more details - 
> Stateful restart of the Node Manager (YARN-1336) was introduced in Hadoop 2.6. 
> This feature worked as expected in Hadoop 2.6 for us. Recently we upgraded our 
> clusters from 2.6 to 2.9.2 along with some OS upgrades. This feature was 
> broken after the upgrade. One of the initial suspicions was 
> LinuxContainerExecutor, as we started using it in this upgrade. 
> yarn-site.xml has all required configurations to enable this feature - 
> {{yarn.nodemanager.recovery.enabled: 'true'}}
> {{yarn.nodemanager.recovery.dir:''}}
> {{yarn.nodemanager.recovery.supervised: 'true'}}
> {{yarn.nodemanager.address: '0.0.0.0:8041'}}
> While containers were running and the NM was restarted, the exception below 
> was constantly observed in the Node Manager logs - 
> {quote}
> java.io.IOException: *Timeout while waiting for exit code from 
> container_e37_1583181000856_0008_01_43*2020-03-05 17:45:18,241 ERROR 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch:
>  Unable to recover container container_e37_1583181000856_0008_01_43
> {quote}
> {quote}at 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.reacquireContainer(ContainerExecutor.java:274)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.reacquireContainer(LinuxContainerExecutor.java:631)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:84)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2020-03-05 17:45:18,241 ERROR 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch:
>  Unable to recover container container_e37_1583181000856_0008_01_18
> java.io.IOException: Timeout while waiting for exit code from 
> container_e37_1583181000856_0008_01_18
> at 
> org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.reacquireContainer(ContainerExecutor.java:274)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.reacquireContainer(LinuxContainerExecutor.java:631)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:84)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 2020-03-05 17:45:18,242 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch:
>  Recovered container exited with a non-zero exit code 154
> {quote}
> {quote}2020-03-05 17:45:18,243 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch:
>  Recovered container exited with a non-zero exit code 154
> {quote}
> After some digging into what was causing the exit file to be missing, we 
> identified at the OS level that running container processes go down as soon 
> as the NM goes down. The process tree looks perfectly fine, as the 
> container-executor takes care of forking the child process as expected. We 
> dug deeper into various parts of the code to see if anything caused the 
> failure. 
> One question was whether we broke anything in our internal repo after we forked 
> 2.9.2 from open source. Started looking into code at different areas like NM 
> shutdown h

[jira] [Created] (YARN-10205) NodeManager stateful restart feature did not work as expected - information only (Resolved)

2020-03-22 Thread Anil Sadineni (Jira)
Anil Sadineni created YARN-10205:


 Summary: NodeManager stateful restart feature did not work as 
expected - information only (Resolved)
 Key: YARN-10205
 URL: https://issues.apache.org/jira/browse/YARN-10205
 Project: Hadoop YARN
  Issue Type: Test
  Components: graceful, nodemanager, rolling upgrade, yarn
Reporter: Anil Sadineni


*TL;DR* This is an information-only Jira on the stateful restart of the node 
manager feature. The unexpected behavior of this feature was due to systemd 
process configuration in this case. Please read below for more details - 

Stateful restart of the Node Manager (YARN-1336) was introduced in Hadoop 2.6. 
This feature worked as expected in Hadoop 2.6 for us. Recently we upgraded our 
clusters from 2.6 to 2.9.2 along with some OS upgrades. This feature was broken 
after the upgrade. One of the initial suspicions was LinuxContainerExecutor, as 
we started using it in this upgrade. 

yarn-site.xml has all required configurations to enable this feature - 

{{yarn.nodemanager.recovery.enabled: 'true'}}

{{yarn.nodemanager.recovery.dir:''}}

{{yarn.nodemanager.recovery.supervised: 'true'}}

{{yarn.nodemanager.address: '0.0.0.0:8041'}}

While containers were running and the NM was restarted, the exception below was 
constantly observed in the Node Manager logs - 
{quote}
java.io.IOException: *Timeout while waiting for exit code from 
container_e37_1583181000856_0008_01_43*2020-03-05 17:45:18,241 ERROR 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch:
 Unable to recover container container_e37_1583181000856_0008_01_43
{quote}
{quote}at 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.reacquireContainer(ContainerExecutor.java:274)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.reacquireContainer(LinuxContainerExecutor.java:631)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:84)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:47)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-03-05 17:45:18,241 ERROR 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch:
 Unable to recover container container_e37_1583181000856_0008_01_18
java.io.IOException: Timeout while waiting for exit code from 
container_e37_1583181000856_0008_01_18
at 
org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor.reacquireContainer(ContainerExecutor.java:274)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.reacquireContainer(LinuxContainerExecutor.java:631)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:84)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch.call(RecoveredContainerLaunch.java:47)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2020-03-05 17:45:18,242 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch:
 Recovered container exited with a non-zero exit code 154
{quote}
{quote}2020-03-05 17:45:18,243 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.RecoveredContainerLaunch:
 Recovered container exited with a non-zero exit code 154
{quote}
After some digging into what was causing the exit file to be missing, we 
identified at the OS level that running container processes go down as soon as 
the NM goes down. The process tree looks perfectly fine, as the 
container-executor takes care of forking the child process as expected. We dug 
deeper into various parts of the code to see if anything caused the failure. 

One question was whether we broke anything in our internal repo after we forked 
2.9.2 from open source. We started looking into code in different areas like the 
NM shutdown hook and cleanup process, the NM state store on container launch, NM 
aux services, container-executor, shell launch and cleanup related hooks, etc. 
Things looked fine, as expected. 

It was identified that the hadoop-nodemanager systemd unit is configured to use 
the default KillMode, which is control-group. 
[https://www.freedesktop.org/software/systemd/man/systemd.kill.html#KillMode=]

This is causing s
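
A minimal sketch of the kind of override that addresses this (the unit name is
taken from our deployment; the drop-in path is an assumption, and
KillMode=process leaves the container-executor's forked children running when
the NM main process stops):

    # /etc/systemd/system/hadoop-nodemanager.service.d/override.conf
    [Service]
    KillMode=process

    # reload systemd and restart the NM for the override to take effect
    systemctl daemon-reload
    systemctl restart hadoop-nodemanager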

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/

[Mar 21, 2020 4:44:55 PM] (tasanuma) HDFS-15214. WebHDFS: Add snapshot counts 
to Content Summary. Contributed




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

Failed CTEST tests :

   remote_block_reader 
   memcheck_remote_block_reader 
   bad_datanode 
   memcheck_bad_datanode 

Failed junit tests :

   hadoop.io.compress.snappy.TestSnappyCompressorDecompressor 
   hadoop.io.compress.TestCompressorDecompressor 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.TestMapreduceConfigFields 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-compile-cc-root.txt
  [32K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-compile-javac-root.txt
  [428K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/whitespace-eol.txt
  [13M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/xml.txt
  [20K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1446/artifact/out/branch-findbugs-hadoop-cloud-storage-project_hadoop-cos-warnings.html
  [12K]

   javadoc

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-03-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.mapreduce.v2.TestUberAM 
   hadoop.tools.TestDistCpSystem 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [240K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/632/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-b

Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-22 Thread Sammi Chen
+1 (binding).

- Verified hashes and signatures
- Built from source
- Verified that the last commit of the release binary matches the tag
- Started an Ozone cluster with docker-compose (see the sketch below)
- Ran ozone sh and scmcli commands
- Ran freon to create 100k keys with validation
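
(For the docker-compose step, roughly the following from the extracted release
binary tarball; the compose directory layout and service names are assumptions
based on the Ozone docs:)

    cd compose/ozone
    docker-compose up -d --scale datanode=3
    docker-compose ps    # confirm om, scm, s3g and datanode containers are up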

Thanks Dinesh for driving the release.

Bests,
Sammi

On Mon, Mar 16, 2020 at 10:27 AM Dinesh Chitlangia 
wrote:

> Hi Folks,
>
> We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
>
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1262
>
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
>
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
>
> *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
>
> Note: This release is beta quality, it’s not recommended to use in
> production but we believe that it’s stable enough to try out the feature
> set and collect feedback.
>
>
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
>
> Thanks,
> Dinesh Chitlangia
>