Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Rakesh Radhakrishnan
Thanks Sammi for getting this out!

+1 (binding)

 * Verified signatures.
 * Built from source.
 * Deployed a small, non-HA, unsecured cluster.
 * Verified basic Ozone file system operations.
 * Tried out a few basic Ozone shell commands: create, list, delete.
 * Ran a few Freon benchmark tests.
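For anyone reproducing the signature and checksum checks listed in these votes, a minimal sketch follows; the file names are placeholders, not artifacts from this thread, and the gpg steps are shown as comments because they need the real release artifacts and KEYS file.

```shell
# Hedged sketch of release-artifact verification; file names are placeholders.
# With the real artifacts, the signature check would be:
#   gpg --import KEYS
#   gpg --verify ozone-1.0.0-src.tar.gz.asc ozone-1.0.0-src.tar.gz
# The checksum step, demonstrated here on a stand-in file:
printf 'stand-in for the release tarball' > artifact.tar.gz
sha512sum artifact.tar.gz > artifact.tar.gz.sha512  # normally published with the artifact
sha512sum -c artifact.tar.gz.sha512                 # prints "artifact.tar.gz: OK"
```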

Thanks,
Rakesh

On Tue, Sep 1, 2020 at 11:53 AM Jitendra Pandey
 wrote:

> +1 (binding)
>
> 1. Verified signatures.
> 2. Built from source.
> 3. Deployed with Docker.
> 4. Tested basic S3 APIs.
>
> On Tue, Aug 25, 2020 at 7:01 AM Sammi Chen  wrote:
>
> > RC1 artifacts are at:
> > https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> >
> > Maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1278
> >
> > The public key used for signing the artifacts can be found at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The RC1 tag in github is at:
> > https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> >
> > Changes added in RC1:
> > 1. HDDS-4063. Fix InstallSnapshot in OM HA.
> > 2. HDDS-4139. Update version number in upgrade tests.
> > 3. HDDS-4144. Update version info in the hadoop client dependency readme.
> >
> > *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm PST.*
> >
> > Thanks,
> > Sammi Chen
> >
>


Re: [VOTE] End of Life Hadoop 2.9

2020-08-31 Thread Masatake Iwasaki

+1

Thanks,
Masatake Iwasaki

On 2020/09/01 4:09, Wei-Chiu Chuang wrote:

Dear fellow Hadoop developers,

Given the overwhelming feedback from the discussion thread
https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
thread for the community to vote and start the 2.9 EOL process.

What this entails:

(1) an official announcement that no further regular Hadoop 2.9.x releases
will be made after 2.9.2 (which was GA on 11/19/2019)
(2) resolve JIRAs that specifically target 2.9.3 as won't fix.


This vote will run for 7 days and will conclude by September 7th, 12:00pm
pacific time.
Committers are eligible to cast binding votes. Non-committers are welcome
to cast non-binding votes.

Here is my vote, +1



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Jitendra Pandey
+1 (binding)

1. Verified signatures.
2. Built from source.
3. Deployed with Docker.
4. Tested basic S3 APIs.
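As a rough illustration of what "tested basic S3 APIs" can look like, here is a dry-run sketch using the AWS CLI against Ozone's S3 gateway; the endpoint port and bucket name are assumptions (9878 is the gateway's usual default), not details from this vote.

```shell
# Dry-run sketch: each command is echoed rather than executed, since it needs
# a live cluster. Drop the leading 'echo' to run against a deployed gateway.
ENDPOINT="http://localhost:9878"   # assumed S3 gateway endpoint
BUCKET="bucket1"                   # assumed bucket name
echo aws s3api --endpoint-url "$ENDPOINT" create-bucket --bucket "$BUCKET"
echo aws s3 --endpoint-url "$ENDPOINT" cp /etc/hostname "s3://$BUCKET/hello"
echo aws s3 --endpoint-url "$ENDPOINT" ls "s3://$BUCKET/"
```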

On Tue, Aug 25, 2020 at 7:01 AM Sammi Chen  wrote:

> RC1 artifacts are at:
> https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
>
> Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1278
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The RC1 tag in github is at:
> https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
>
> Changes added in RC1:
> 1. HDDS-4063. Fix InstallSnapshot in OM HA.
> 2. HDDS-4139. Update version number in upgrade tests.
> 3. HDDS-4144. Update version info in the hadoop client dependency readme.
>
> *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm PST.*
>
> Thanks,
> Sammi Chen
>


Re: [VOTE] End of Life Hadoop 2.9

2020-08-31 Thread Dinesh Chitlangia
+1

Thanks,
Dinesh

On Mon, Aug 31, 2020 at 3:09 PM Wei-Chiu Chuang  wrote:

> Dear fellow Hadoop developers,
>
> Given the overwhelming feedback from the discussion thread
> https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> thread for the community to vote and start the 2.9 EOL process.
>
> What this entails:
>
> (1) an official announcement that no further regular Hadoop 2.9.x releases
> will be made after 2.9.2 (which was GA on 11/19/2019)
> (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
>
>
> This vote will run for 7 days and will conclude by September 7th, 12:00pm
> pacific time.
> Committers are eligible to cast binding votes. Non-committers are welcome
> to cast non-binding votes.
>
> Here is my vote, +1
>


Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Dinesh Chitlangia
+1

1. Verified signatures, checksums, and the git tag.
2. Verified the output of `ozone version`.
3. Built from source.
4. Executed a sample using the Ozone compose recipes.

Thanks Sammi for organizing the release.

-Dinesh


On Tue, Aug 25, 2020 at 10:01 AM Sammi Chen  wrote:

> RC1 artifacts are at:
> https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
>
> Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1278
>
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The RC1 tag in github is at:
> https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
>
> Changes added in RC1:
> 1. HDDS-4063. Fix InstallSnapshot in OM HA.
> 2. HDDS-4139. Update version number in upgrade tests.
> 3. HDDS-4144. Update version info in the hadoop client dependency readme.
>
> *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm PST.*
>
> Thanks,
> Sammi Chen
>


Re: [VOTE] End of Life Hadoop 2.9(Internet mail)

2020-08-31 Thread 毛宝龙
+1 (non-binding)
Thanks
baoloong


Sent from my iPhone


-- Original --
From: weichiu 
Date: Tue, Sep 1, 2020 3:09 AM
To: Hdfs-dev , Hadoop Common 
, yarn-dev , 
ozone-...@hadoop.apache.org 
Subject: Re: [VOTE] End of Life Hadoop 2.9(Internet mail)

Dear fellow Hadoop developers,

Given the overwhelming feedback from the discussion thread
https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
thread for the community to vote and start the 2.9 EOL process.

What this entails:

(1) an official announcement that no further regular Hadoop 2.9.x releases
will be made after 2.9.2 (which was GA on 11/19/2019)
(2) resolve JIRAs that specifically target 2.9.3 as won't fix.


This vote will run for 7 days and will conclude by September 7th, 12:00pm
pacific time.
Committers are eligible to cast binding votes. Non-committers are welcome
to cast non-binding votes.

Here is my vote, +1


Re: [VOTE] End of Life Hadoop 2.9

2020-08-31 Thread Sammi Chen
+1

Sammi

On Tue, Sep 1, 2020 at 3:09 AM Wei-Chiu Chuang  wrote:

> Dear fellow Hadoop developers,
>
> Given the overwhelming feedback from the discussion thread
> https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> thread for the community to vote and start the 2.9 EOL process.
>
> What this entails:
>
> (1) an official announcement that no further regular Hadoop 2.9.x releases
> will be made after 2.9.2 (which was GA on 11/19/2019)
> (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
>
>
> This vote will run for 7 days and will conclude by September 7th, 12:00pm
> pacific time.
> Committers are eligible to cast binding votes. Non-committers are welcome
> to cast non-binding votes.
>
> Here is my vote, +1
>


Re: [VOTE] End of Life Hadoop 2.9

2020-08-31 Thread Xiaoqiao He
+1

On Tue, Sep 1, 2020 at 10:14 AM Ayush Saxena  wrote:

> +1
>
> -Ayush
>
> > On 01-Sep-2020, at 7:23 AM, Akira Ajisaka  wrote:
> >
> > +1
> >
> > -Akira
> >
> >> On Tue, Sep 1, 2020 at 4:09 AM Wei-Chiu Chuang 
> wrote:
> >>
> >> Dear fellow Hadoop developers,
> >>
> >> Given the overwhelming feedback from the discussion thread
> >> https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> >> thread for the community to vote and start the 2.9 EOL process.
> >>
> >> What this entails:
> >>
> >> (1) an official announcement that no further regular Hadoop 2.9.x
> releases
> >> will be made after 2.9.2 (which was GA on 11/19/2019)
> >> (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
> >>
> >>
> >> This vote will run for 7 days and will conclude by September 7th,
> 12:00pm
> >> pacific time.
> >> Committers are eligible to cast binding votes. Non-committers are
> welcome
> >> to cast non-binding votes.
> >>
> >> Here is my vote, +1
> >>
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] End of Life Hadoop 2.9

2020-08-31 Thread Ayush Saxena
+1

-Ayush

> On 01-Sep-2020, at 7:23 AM, Akira Ajisaka  wrote:
> 
> +1
> 
> -Akira
> 
>> On Tue, Sep 1, 2020 at 4:09 AM Wei-Chiu Chuang  wrote:
>> 
>> Dear fellow Hadoop developers,
>> 
>> Given the overwhelming feedback from the discussion thread
>> https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
>> thread for the community to vote and start the 2.9 EOL process.
>> 
>> What this entails:
>> 
>> (1) an official announcement that no further regular Hadoop 2.9.x releases
>> will be made after 2.9.2 (which was GA on 11/19/2019)
>> (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
>> 
>> 
>> This vote will run for 7 days and will conclude by September 7th, 12:00pm
>> pacific time.
>> Committers are eligible to cast binding votes. Non-committers are welcome
>> to cast non-binding votes.
>> 
>> Here is my vote, +1
>> 

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] End of Life Hadoop 2.9

2020-08-31 Thread Akira Ajisaka
+1

-Akira

On Tue, Sep 1, 2020 at 4:09 AM Wei-Chiu Chuang  wrote:

> Dear fellow Hadoop developers,
>
> Given the overwhelming feedback from the discussion thread
> https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> thread for the community to vote and start the 2.9 EOL process.
>
> What this entails:
>
> (1) an official announcement that no further regular Hadoop 2.9.x releases
> will be made after 2.9.2 (which was GA on 11/19/2019)
> (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
>
>
> This vote will run for 7 days and will conclude by September 7th, 12:00pm
> pacific time.
> Committers are eligible to cast binding votes. Non-committers are welcome
> to cast non-binding votes.
>
> Here is my vote, +1
>


Re: [DISCUSS] Hadoop 2.10.1 release

2020-08-31 Thread Mingliang Liu
I can see how I can help, but I cannot take the RM role this time.

Thanks,

On Mon, Aug 31, 2020 at 12:15 PM Wei-Chiu Chuang
 wrote:

> Hello,
>
> I see that Masatake graciously agreed to volunteer with the Hadoop 2.10.1
> release work in the 2.9 branch EOL discussion thread
> https://s.apache.org/hadoop2.9eold
>
> Would anyone else like to contribute as well?
>
> Thanks
>


-- 
L


Re: [VOTE] End of Life Hadoop 2.9

2020-08-31 Thread Mingliang Liu
+1 (binding)

p.s. We should still maintain 2.10 for Hadoop 2 users and encourage them to
upgrade to Hadoop 3.

On Mon, Aug 31, 2020 at 12:10 PM Wei-Chiu Chuang  wrote:

> Dear fellow Hadoop developers,
>
> Given the overwhelming feedback from the discussion thread
> https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
> thread for the community to vote and start the 2.9 EOL process.
>
> What this entails:
>
> (1) an official announcement that no further regular Hadoop 2.9.x releases
> will be made after 2.9.2 (which was GA on 11/19/2019)
> (2) resolve JIRAs that specifically target 2.9.3 as won't fix.
>
>
> This vote will run for 7 days and will conclude by September 7th, 12:00pm
> pacific time.
> Committers are eligible to cast binding votes. Non-committers are welcome
> to cast non-binding votes.
>
> Here is my vote, +1
>


-- 
L


Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Hanisha Koneru
Thank you Sammi for putting up the RC.

+1 (binding)

 - Built from source tarball
 - Ran integration tests and sanity checks
 - Built a 5-node cluster with OM HA
   - Tested reads and writes
   - Tested OM restarts and failovers
   - Tested Ozone shell commands
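A dry-run sketch of the kind of Ozone shell session the checklist above refers to; the volume, bucket, and key names are invented for illustration, not taken from this vote.

```shell
# Each command is echoed rather than executed (a live cluster is required);
# remove the leading 'echo' to run them for real.
echo ozone sh volume create /vol1
echo ozone sh bucket create /vol1/bucket1
echo ozone sh key put /vol1/bucket1/key1 README.md
echo ozone sh key list /vol1/bucket1
echo ozone sh key delete /vol1/bucket1/key1
```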

Thanks
Hanisha


> On Aug 31, 2020, at 4:22 PM, Bharat Viswanadham  wrote:
> 
> +1 (binding)
> 
> *  Built from the source tarball.
> *  Verified the checksums and signatures.
> *  Verified basic Ozone file system (o3fs) and S3 operations via the AWS S3 CLI
> on the OM HA unsecured cluster.
> *  Verified Ozone shell commands via the CLI on the OM HA unsecured cluster.
> *  Verified basic Ozone file system and S3 operations via the AWS S3 CLI on the
> OM HA secure cluster.
> *  Verified Ozone shell commands via the CLI on the OM HA secure cluster.
> 
> Thanks, Sammi for driving the release.
> 
> Regards,
> Bharat
> 
> 
> On Mon, Aug 31, 2020 at 10:23 AM Xiaoyu Yao 
> wrote:
> 
>> +1 (binding)
>> 
>> * Verify the checksums and signatures.
>> * Verify basic Ozone file system and S3 operations via CLI in secure docker
>> compose environment
>> * Run MR examples and teragen/terasort with ozone secure enabled.
>> * Verify EN/CN document rendering with hugo serve
>> 
>> Thanks Sammi for driving the release.
>> 
>> Regards,
>> Xiaoyu
>> 
>> On Mon, Aug 31, 2020 at 8:55 AM Shashikant Banerjee
>>  wrote:
>> 
>>> +1 (binding)
>>> 
>>> 1. Verified checksums
>>> 2. Verified signatures
>>> 3. Verified the output of `ozone version`
>>> 4. Tried creating a volume and bucket, and writing and reading a key, via the Ozone shell
>>> 5. Verified basic Ozone Filesystem operations
>>> 
>>> Thank you very much, Sammi, for putting the release together.
>>> 
>>> Thanks
>>> Shashi
>>> 
>>> On Mon, Aug 31, 2020 at 4:35 PM Elek, Marton  wrote:
>>> 
 +1 (binding)
 
 
 1. verified signatures
 
 2. verified checksums
 
 3. verified the output of `ozone version` (includes the good git
>>> revision)
 
 4. verified that the source package matches the git tag
 
 5. verified source can be used to build Ozone without previous state
 (docker run -v ... -it maven ... --> built from the source with zero
 local maven cache in 16 minutes --> done on a server this time)
 
 6. Verified Ozone can be used from binary package (cd compose/ozone &&
 test.sh --> all tests were passed)
 
 7. Verified documentation is included in SCM UI
 
 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
 
 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
 executor) [2]
 
 10. Deployed to Kubernetes and executed Flink Word count [3]
 
 11. Deployed to Kubernetes and executed NiFi
 
 Thanks very much, Sammi, for driving this release.
 Marton
 
 ps: NiFi setup requires some more testing. Counters were not updated on
 the UI and in some cases I saw DirNotFound exceptions when I used
 master. But during the last test with -rc1 it worked well.
 
 [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
 
 [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
 
 [3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone
 
 
 On 8/25/20 4:01 PM, Sammi Chen wrote:
> RC1 artifacts are at:
> https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> 
> Maven artifacts are staged at:
> 
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1278
> <
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1277
> 
> 
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> The RC1 tag in github is at:
> https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> 
> Changes added in RC1:
> 1. HDDS-4063. Fix InstallSnapshot in OM HA.
> 2. HDDS-4139. Update version number in upgrade tests.
> 3. HDDS-4144. Update version info in the hadoop client dependency readme.
> 
> *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm PST.*
> 
> Thanks,
> Sammi Chen
> 
 
 -
 To unsubscribe, e-mail: ozone-dev-unsubscr...@hadoop.apache.org
 For additional commands, e-mail: ozone-dev-h...@hadoop.apache.org
 
 
>>> 
>> 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Bharat Viswanadham
+1 (binding)

*  Built from the source tarball.
*  Verified the checksums and signatures.
*  Verified basic Ozone file system (o3fs) and S3 operations via the AWS S3 CLI
on the OM HA unsecured cluster.
*  Verified Ozone shell commands via the CLI on the OM HA unsecured cluster.
*  Verified basic Ozone file system and S3 operations via the AWS S3 CLI on the
OM HA secure cluster.
*  Verified Ozone shell commands via the CLI on the OM HA secure cluster.

Thanks, Sammi for driving the release.

Regards,
Bharat


On Mon, Aug 31, 2020 at 10:23 AM Xiaoyu Yao 
wrote:

> +1 (binding)
>
> * Verify the checksums and signatures.
> * Verify basic Ozone file system and S3 operations via CLI in secure docker
> compose environment
> * Run MR examples and teragen/terasort with ozone secure enabled.
> * Verify EN/CN document rendering with hugo serve
>
> Thanks Sammi for driving the release.
>
> Regards,
> Xiaoyu
>
> On Mon, Aug 31, 2020 at 8:55 AM Shashikant Banerjee
>  wrote:
>
> > +1 (binding)
> >
> > 1. Verified checksums
> > 2. Verified signatures
> > 3. Verified the output of `ozone version`
> > 4. Tried creating a volume and bucket, and writing and reading a key, via the Ozone shell
> > 5. Verified basic Ozone Filesystem operations
> >
> > Thank you very much, Sammi, for putting the release together.
> >
> > Thanks
> > Shashi
> >
> > On Mon, Aug 31, 2020 at 4:35 PM Elek, Marton  wrote:
> >
> > > +1 (binding)
> > >
> > >
> > > 1. verified signatures
> > >
> > > 2. verified checksums
> > >
> > > 3. verified the output of `ozone version` (includes the good git
> > revision)
> > >
> > > 4. verified that the source package matches the git tag
> > >
> > > 5. verified source can be used to build Ozone without previous state
> > > (docker run -v ... -it maven ... --> built from the source with zero
> > > local maven cache in 16 minutes --> done on a server this time)
> > >
> > > 6. Verified Ozone can be used from binary package (cd compose/ozone &&
> > > test.sh --> all tests were passed)
> > >
> > > 7. Verified documentation is included in SCM UI
> > >
> > > 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
> > >
> > > 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
> > > executor) [2]
> > >
> > > 10. Deployed to Kubernetes and executed Flink Word count [3]
> > >
> > > 11. Deployed to Kubernetes and executed NiFi
> > >
> > > Thanks very much, Sammi, for driving this release.
> > > Marton
> > >
> > > ps: NiFi setup requires some more testing. Counters were not updated on
> > > the UI and in some cases I saw DirNotFound exceptions when I used
> > > master. But during the last test with -rc1 it worked well.
> > >
> > > [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
> > >
> > > [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
> > >
> > > [3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone
> > >
> > >
> > > On 8/25/20 4:01 PM, Sammi Chen wrote:
> > > > RC1 artifacts are at:
> > > > https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> > > >
> > > > Maven artifacts are staged at:
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1278
> > > > <
> > https://repository.apache.org/content/repositories/orgapachehadoop-1277
> > > >
> > > >
> > > > The public key used for signing the artifacts can be found at:
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > > The RC1 tag in github is at:
> > > > https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> > > >
> > > > Changes added in RC1:
> > > > 1. HDDS-4063. Fix InstallSnapshot in OM HA.
> > > > 2. HDDS-4139. Update version number in upgrade tests.
> > > > 3. HDDS-4144. Update version info in the hadoop client dependency readme.
> > > >
> > > > *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm PST.*
> > > >
> > > > Thanks,
> > > > Sammi Chen
> > > >
> > >
> > > -
> > > To unsubscribe, e-mail: ozone-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: ozone-dev-h...@hadoop.apache.org
> > >
> > >
> >
>


Re: [E] Re: A more inclusive elephant...

2020-08-31 Thread Eric Badger
https://issues.apache.org/jira/browse/HADOOP-17169

I'm not sure who the right people are to review this patch, but it removes
all non-inclusive terminology from Hadoop Common. This ends up changing
some things in other projects (mostly HDFS) as well, since they depend
on hadoop-common. I believe patch 003 is ready for review and
would appreciate some experts letting me know if anything is a bad change
code-wise.

Eric

On Thu, Jul 30, 2020 at 5:59 PM Vivek Ratnavel Subramanian
 wrote:

> Hi Eric and Carlo,
>
> Thanks for taking the initiative! I am willing to take this task up for
> improving the Ozone codebase.
>
> I have cloned the task and sub-tasks for Ozone -
>
> https://issues.apache.org/jira/browse/HDDS-4050
>
> - Vivek Subramanian
>
> On Thu, Jul 30, 2020 at 3:54 PM Eric Badger
>  wrote:
>
> > Thanks for the responses, Jon and Carlo!
> >
> > It makes sense to me to prevent future patches from re-introducing the
> > terminology. I can file a JIRA to add the +1/-1 functionality to the
> > precommit builds.
> >
> > As for splitting up the work, I think it'll probably be easiest and
> > cleanest to have an umbrella for each subproject of Hadoop (Hadoop, HDFS,
> > YARN, Mapreduce) with smaller tasks (e.g. whitelist/blacklist,
> > master/slave) as subtasks of each umbrella. That way each expert can
> chime
> > in on their relative land of expertise and the patches won't be
> gigantic. I
> > can then link the umbrella JIRAs together so everything can be found
> > easily. As Carlo pointed out, it's unclear whether fewer, but larger
> > patches is better or worse than more, smaller patches. But I think that
> at
> > least for the sake of manageability and getting this into Apache, smaller
> > patches is likely easier.
> >
> > Eric
> >
> > On Thu, Jul 30, 2020 at 5:50 PM Carlo Aldo Curino <
> carlo.cur...@gmail.com>
> > wrote:
> >
> > > Thanks again Eric for leading the charge. As for whether to chop it up
> or
> > > keep it in fewer patches, I think it primarily impact the conflict
> > surface
> > > with dev branches and other in-flight development. More patches are
> > likely
> > > creating more localized clashes (as in I clash with a smaller patch,
> > which
> > > might be less daunting, though potentially more of them to deal with).
> I
> > > don't have a strong preference, maybe chunking it into reasonable
> > packages,
> > > so that you can involve the right core group of committers to weigh in
> for
> > > each sub-area.
> > >
> > > Thanks,
> > > Carlo
> > >
> > >
> > >
> > > On Thu, Jul 30, 2020 at 1:20 PM Jonathan Eagles 
> > wrote:
> > >
> > > > Thanks, Eric. I like this proposal and I'm glad this work is getting
> > > > traction. A few thoughts on implementation.
> > > >
> > > > Once the fix is done, I think it will be necessary to ensure these
> > > > language restrictions are enforced at the patch level. This will
> +1/-1
> > > > patches that introduce terminology that violate our policy.
> > > >
> > > > As to splitting up the patches, it may be necessary to split these
> > up
> > > > further in cases where feature experts need to weigh in on
> > compatibility
> > > > (usually with regards to persistence or wire compatibility). This can
> > be
> > > > done case-by-case basis.
> > > >
> > > > Regards,
> > > > jeagles
> > > >
> > > > On Thu, Jul 30, 2020 at 1:28 PM Eric Badger
> > > >  wrote:
> > > >
> > > >> I have created https://issues.apache.org/jira/browse/HADOOP-17168 to
> > > >> remove
> > > >> non-inclusive terminology from Hadoop. However I would like input on
> > how
> > > >> to
> > > >> go about putting up patches. This umbrella JIRA is under Hadoop
> > Common,
> > > >> but
> > > >> there are sure to be instances in YARN, HDFS, and Mapreduce. Should
> I
> > > >> create an umbrella like this for each subproject? Or should I do all
> > > >> whitelist/blacklist fixes in a single JIRA that fixes them across
> all
> > > >> Hadoop subprojects?
> > > >>
> > > >> Thanks,
> > > >>
> > > >> Eric
> > > >>
> > > >> On Thu, Jul 30, 2020 at 8:47 AM Carlo Aldo Curino <
> > > carlo.cur...@gmail.com
> > > >> >
> > > >> wrote:
> > > >>
> > > >> > RE Mentorship: I think the Mentorship program is an interesting
> > idea.
> > > >> The
> > > >> > concern with these efforts is always the follow-through. If you
> can
> > > >> find a
> > > >> > group of folks that are motivated and will work on this I think it
> > > >> could be
> > > >> > a great idea, especially if you focus on a diverse set of mentees,
> > and
> > > >> the
> > > >> > focus is on teaching not just code but a bit of the "apache way"
> of
> > > >> > interacting, and conducting yourself in open-source.
> > > >> >
> > > >> > RE Diversity and representation: Wei-Chiu I think you raise an
> > > imp

[DISCUSS] Hadoop 2.10.1 release

2020-08-31 Thread Wei-Chiu Chuang
Hello,

I see that Masatake graciously agreed to volunteer with the Hadoop 2.10.1
release work in the 2.9 branch EOL discussion thread
https://s.apache.org/hadoop2.9eold

Would anyone else like to contribute as well?

Thanks


[VOTE] End of Life Hadoop 2.9

2020-08-31 Thread Wei-Chiu Chuang
Dear fellow Hadoop developers,

Given the overwhelming feedback from the discussion thread
https://s.apache.org/hadoop2.9eold, I'd like to start an official vote
thread for the community to vote and start the 2.9 EOL process.

What this entails:

(1) an official announcement that no further regular Hadoop 2.9.x releases
will be made after 2.9.2 (which was GA on 11/19/2019)
(2) resolve JIRAs that specifically target 2.9.3 as won't fix.


This vote will run for 7 days and will conclude by September 7th, 12:00pm
pacific time.
Committers are eligible to cast binding votes. Non-committers are welcome
to cast non-binding votes.

Here is my vote, +1


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-08-31 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/

No changes




-1 overall


The following subsystems voted -1:
asflicense pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.TestReadStripedFileWithDNFailure 
   hadoop.hdfs.web.TestWebHDFSXAttr 
   hadoop.hdfs.TestGetFileChecksum 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier 
   hadoop.hdfs.web.TestWebHdfsFileSystemContract 
   hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestClusterMapReduceTestCase 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/diff-javadoc-javadoc-root.txt
  [1.3M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [636K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/patch-unit-hadoop-tools_hadoop-fs2img.txt
  [8.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/251/artifact/out/patch-unit-hadoop-client-modules_hadoop-client-check-invariants.txt
  [0]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-ja

Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Xiaoyu Yao
+1 (binding)

* Verify the checksums and signatures.
* Verify basic Ozone file system and S3 operations via CLI in secure docker
compose environment
* Run MR examples and teragen/terasort with ozone secure enabled.
* Verify EN/CN document rendering with hugo serve
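A dry-run sketch of the teragen/terasort smoke test mentioned above; the example jar name, o3fs paths, and row count are assumptions for illustration, not details from this vote.

```shell
# Commands are echoed rather than executed, since they need a running secure
# cluster; remove the leading 'echo' to run them in the compose environment.
IN="o3fs://bucket1.volume1/teragen-in"    # assumed input path
OUT="o3fs://bucket1.volume1/terasort-out" # assumed output path
echo hadoop jar hadoop-mapreduce-examples.jar teragen 1000 "$IN"
echo hadoop jar hadoop-mapreduce-examples.jar terasort "$IN" "$OUT"
```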

Thanks Sammi for driving the release.

Regards,
Xiaoyu

On Mon, Aug 31, 2020 at 8:55 AM Shashikant Banerjee
 wrote:

> +1 (binding)
>
> 1. Verified checksums
> 2. Verified signatures
> 3. Verified the output of `ozone version`
> 4. Tried creating a volume and bucket, and writing and reading a key, via the Ozone shell
> 5. Verified basic Ozone Filesystem operations
>
> Thank you very much, Sammi, for putting the release together.
>
> Thanks
> Shashi
>
> On Mon, Aug 31, 2020 at 4:35 PM Elek, Marton  wrote:
>
> > +1 (binding)
> >
> >
> > 1. verified signatures
> >
> > 2. verified checksums
> >
> > 3. verified the output of `ozone version` (includes the good git
> revision)
> >
> > 4. verified that the source package matches the git tag
> >
> > 5. verified source can be used to build Ozone without previous state
> > (docker run -v ... -it maven ... --> built from the source with zero
> > local maven cache in 16 minutes --> done on a server this time)
> >
> > 6. Verified Ozone can be used from binary package (cd compose/ozone &&
> > test.sh --> all tests were passed)
> >
> > 7. Verified documentation is included in SCM UI
> >
> > 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
> >
> > 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
> > executor) [2]
> >
> > 10. Deployed to Kubernetes and executed Flink Word count [3]
> >
> > 11. Deployed to Kubernetes and executed Nifi
> >
> > Thanks very much, Sammi, for driving this release...
> > Marton
> >
> > ps: NiFi setup requires some more testing. Counters were not updated on
> > the UI, and in some cases I saw DirNotFound exceptions when I used
> > master. But during the last test with -rc1 it worked well.
> >
> > [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
> >
> > [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
> >
> > [3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone
> >
> >
> > On 8/25/20 4:01 PM, Sammi Chen wrote:
> > > RC1 artifacts are at:
> > > https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> > > 
> > >
> > > Maven artifacts are staged at:
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1278
> > > <
> https://repository.apache.org/content/repositories/orgapachehadoop-1277
> > >
> > >
> > > The public key used for signing the artifacts can be found at:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > The RC1 tag in github is at:
> > > https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> > > 
> > >
> > > Change log of RC1 additions:
> > > 1. HDDS-4063. Fix InstallSnapshot in OM HA
> > > 2. HDDS-4139. Update version number in upgrade tests.
> > > 3. HDDS-4144. Update version info in hadoop client dependency readme
> > >
> > > *The vote will run for 7 days, ending on Aug 31st, 2020 at 11:59 pm
> PST.*
> > >
> > > Thanks,
> > > Sammi Chen
> > >
> >
> > -
> > To unsubscribe, e-mail: ozone-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: ozone-dev-h...@hadoop.apache.org
> >
> >
>


Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Shashikant Banerjee
+1(binding)

1. Verified checksums
2. Verified signatures
3. Verified the output of `ozone version`
4. Tried creating a volume and bucket, and writing and reading a key, via the Ozone shell
5. Verified basic Ozone Filesystem operations

Thank you very much, Sammi, for putting the release together.

Thanks
Shashi

On Mon, Aug 31, 2020 at 4:35 PM Elek, Marton  wrote:

> +1 (binding)
>
>
> 1. verified signatures
>
> 2. verified checksums
>
> 3. verified the output of `ozone version` (includes the good git revision)
>
> 4. verified that the source package matches the git tag
>
> 5. verified source can be used to build Ozone without previous state
> (docker run -v ... -it maven ... --> built from the source with zero
> local maven cache in 16 minutes --> done on a server this time)
>
> 6. Verified Ozone can be used from binary package (cd compose/ozone &&
> test.sh --> all tests were passed)
>
> 7. Verified documentation is included in SCM UI
>
> 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
>
> 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
> executor) [2]
>
> 10. Deployed to Kubernetes and executed Flink Word count [3]
>
> 11. Deployed to Kubernetes and executed Nifi
>
> Thanks very much, Sammi, for driving this release...
> Marton
>
> ps: NiFi setup requires some more testing. Counters were not updated on
> the UI, and in some cases I saw DirNotFound exceptions when I used
> master. But during the last test with -rc1 it worked well.
>
> [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
>
> [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
>
> [3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone
>
>
> On 8/25/20 4:01 PM, Sammi Chen wrote:
> > RC1 artifacts are at:
> > https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> > 
> >
> > Maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1278
> >  >
> >
> > The public key used for signing the artifacts can be found at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The RC1 tag in github is at:
> > https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> > 
> >
> > Change log of RC1 additions:
> > 1. HDDS-4063. Fix InstallSnapshot in OM HA
> > 2. HDDS-4139. Update version number in upgrade tests.
> > 3. HDDS-4144. Update version info in hadoop client dependency readme
> >
> > *The vote will run for 7 days, ending on Aug 31st, 2020 at 11:59 pm PST.*
> >
> > Thanks,
> > Sammi Chen
> >
>
> -
> To unsubscribe, e-mail: ozone-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-h...@hadoop.apache.org
>
>


Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-08-31 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/42/

[Aug 29, 2020 12:48:38 AM] (Konstantin Shvachko) HDFS-15290. NPE in HttpServer 
during NameNode startup. Contributed by Simbarashe Dzinamarira.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s):
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml
   hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 504] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 504] 
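The keySet-iterator warning above flags a common Java pattern: iterating a map's keySet() and calling get() once per key, where entrySet() yields the key and value together in a single pass. A minimal self-contained sketch of the pattern and its fix (hypothetical map and method names, not the actual StateMachineFactory code):

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetSketch {
    // Flagged pattern: each iteration pays an extra hash lookup via get().
    static int sumViaKeySet(Map<String, Integer> counts) {
        int sum = 0;
        for (String key : counts.keySet()) {
            sum += counts.get(key);
        }
        return sum;
    }

    // Preferred: entrySet() hands back key and value in one pass.
    static int sumViaEntrySet(Map<String, Integer> counts) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        System.out.println(sumViaKeySet(m) == sumViaEntrySet(m));
    }
}
```

Both methods return the same result; the entrySet() variant simply avoids the redundant lookup findbugs is complaining about.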

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
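The "boxed value is unboxed and then immediately reboxed" warning above refers to a Long -> long -> Long round trip, which forces a needless fresh autobox. A minimal sketch of the pattern and the fix (hypothetical methods, not the actual ColumnRWHelper code):

```java
public class ReboxSketch {
    // Flagged pattern: unbox to a primitive, then immediately rebox it.
    static Long timestampRebox(Long stored) {
        long primitive = stored;  // unbox
        Long boxed = primitive;   // immediate rebox: a needless allocation
        return boxed;
    }

    // Fix: keep the wrapper (or stay primitive end to end).
    static Long timestamp(Long stored) {
        return stored;            // no unbox/rebox round trip
    }

    public static void main(String[] args) {
        System.out.println(timestampRebox(335L).equals(timestamp(335L)));
    }
}
```

The two methods are behaviorally equivalent; the second just skips the wasted conversion cycle.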

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

findbugs :

   module:hadoop-yarn-project 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
State

Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Elek, Marton

+1 (binding)


1. verified signatures

2. verified checksums

3. verified the output of `ozone version` (includes the good git revision)

4. verified that the source package matches the git tag

5. verified source can be used to build Ozone without previous state
(docker run -v ... -it maven ... --> built from the source with zero
local maven cache in 16 minutes --> done on a server this time)


6. Verified Ozone can be used from binary package (cd compose/ozone &&
test.sh --> all tests were passed)

7. Verified documentation is included in SCM UI

8. Deployed to Kubernetes and executed Teragen on Yarn [1]

9. Deployed to Kubernetes and executed Spark (3.0) Word count (local 
executor) [2]


10. Deployed to Kubernetes and executed Flink Word count [3]

11. Deployed to Kubernetes and executed Nifi

Thanks very much, Sammi, for driving this release...
Marton

ps: NiFi setup requires some more testing. Counters were not updated on
the UI, and in some cases I saw DirNotFound exceptions when I used
master. But during the last test with -rc1 it worked well.


[1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone

[2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone

[3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone


On 8/25/20 4:01 PM, Sammi Chen wrote:

RC1 artifacts are at:
https://home.apache.org/~sammichen/ozone-1.0.0-rc1/


Maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1278


The public key used for signing the artifacts can be found at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
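A hedged sketch of the checksum/signature verification steps the votes above describe. The artifact names are hypothetical, and the checksum step is demonstrated on a locally created stand-in file so the sketch is self-contained; the signature step is shown only as comments since it needs the published KEYS file and network access:

```shell
# Stand-in for a downloaded release artifact (hypothetical name).
echo "release-bits" > ozone-1.0.0.tar.gz

# 1. Checksum verification: record and then re-check the digest.
sha512sum ozone-1.0.0.tar.gz > ozone-1.0.0.tar.gz.sha512
sha512sum -c ozone-1.0.0.tar.gz.sha512   # prints "ozone-1.0.0.tar.gz: OK"

# 2. Signature verification (not executed in this sketch):
#    curl -O https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
#    gpg --import KEYS
#    gpg --verify ozone-1.0.0.tar.gz.asc ozone-1.0.0.tar.gz
```

In a real verification the `.sha512` and `.asc` files come from the release staging area rather than being generated locally.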

The RC1 tag in github is at:
https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1


Change log of RC1 additions:
1. HDDS-4063. Fix InstallSnapshot in OM HA
2. HDDS-4139. Update version number in upgrade tests.
3. HDDS-4144. Update version info in hadoop client dependency readme

*The vote will run for 7 days, ending on Aug 31st, 2020 at 11:59 pm PST.*

Thanks,
Sammi Chen



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org