Re: VOTE: Hadoop Ozone 0.4.0-alpha RC2

2019-05-07 Thread Hanisha Koneru
Thanks Ajay for putting up the RC.

+1 non-binding.
Verified the following:
- Built from source 
- Deployed binary to a 3 node docker cluster and ran basic sanity checks
- Ran smoke tests

Thanks
Hanisha

> On Apr 29, 2019, at 9:04 PM, Ajay Kumar  wrote:
> 
> Hi All,
> 
> 
> 
> We have created the third release candidate (RC2) for Apache Hadoop Ozone 
> 0.4.0-alpha.
> 
> 
> 
> This release contains the security feature set for Ozone. Below are some important 
> features in it:
> 
> 
> 
>  *   Hadoop Delegation Tokens and Block Tokens supported for Ozone.
>  *   Transparent Data Encryption (TDE) Support - Allows data blocks to be 
> encrypted-at-rest.
>  *   Kerberos support for Ozone.
>  *   Certificate Infrastructure for Ozone  - Tokens use PKI instead of shared 
> secrets.
>  *   Datanode to Datanode communication secured via mutual TLS.
>  *   Ability to secure an Ozone cluster that works with YARN, Hive, and Spark.
>  *   Skaffold support to deploy Ozone clusters on K8s.
>  *   Support for S3 authentication mechanisms such as the S3 v4 authentication 
> protocol.
>  *   S3 Gateway supports Multipart upload.
>  *   S3A file system is tested and supported.
>  *   Support for Tracing and Profiling for all Ozone components.
>  *   Audit Support - including Audit Parser tools.
>  *   Apache Ranger Support in Ozone.
>  *   Extensive failure testing for Ozone.
> 
> The RC artifacts are available at 
> https://home.apache.org/~ajay/ozone-0.4.0-alpha-rc2/
> 
> 
> 
> The RC tag in git is ozone-0.4.0-alpha-RC2 (git hash 
> 4ea602c1ee7b5e1a5560c6cbd096de4b140f776b)
> 
> 
> 
> Please try out, vote, or just give us feedback.
> 
> 
> 
> The vote will run for 5 days, ending on May 4, 2019, 04:00 UTC.
> 
> 
> 
> Thank you very much,
> 
> Ajay


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop Ozone 0.4.1-alpha

2019-10-11 Thread Hanisha Koneru
Thank you Nanda for putting up the RC.

+1 binding.

Verified the following:
  - Built from source
  - Deployed to 5 node cluster and ran smoke tests.
  - Ran sanity checks

Thanks
Hanisha

> On Oct 4, 2019, at 10:42 AM, Nanda kumar  wrote:
> 
> Hi Folks,
> 
> I have put together RC0 for Apache Hadoop Ozone 0.4.1-alpha.
> 
> The artifacts are at:
> https://home.apache.org/~nanda/ozone/release/0.4.1/RC0/
> 
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1238/
> 
> The RC tag in git is at:
> https://github.com/apache/hadoop/tree/ozone-0.4.1-alpha-RC0
> 
> And the public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> This release contains 363 fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
> 
> *The vote will run for 7 days, ending on October 11th at 11:59 pm IST.*
> Note: This release is alpha quality; it’s not recommended for use in
> production, but we believe that it’s stable enough to try out the feature
> set and collect feedback.
> 
> 
> [1] https://s.apache.org/yfudc
> 
> Thanks,
> Team Ozone



Re: [VOTE] Apache Hadoop Ozone 0.5.0-beta RC2

2020-03-21 Thread Hanisha Koneru
Thank you Dinesh for putting up the RCs.

+1 binding.

Verified the following:
  - Built from source
  - Deployed to 5 node docker cluster and ran sanity tests.
  - Ran smoke tests

Thanks
Hanisha

> On Mar 15, 2020, at 7:27 PM, Dinesh Chitlangia  wrote:
> 
> Hi Folks,
> 
> We have put together RC2 for Apache Hadoop Ozone 0.5.0-beta.
> 
> The RC artifacts are at:
> https://home.apache.org/~dineshc/ozone-0.5.0-rc2/
> 
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> The maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1262
> 
> The RC tag in git is at:
> https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC2
> 
> This release contains 800+ fixes/improvements [1].
> Thanks to everyone who put in the effort to make this happen.
> 
> *The vote will run for 7 days, ending on March 22nd 2020 at 11:59 pm PST.*
> 
> Note: This release is beta quality; it’s not recommended for use in
> production, but we believe that it’s stable enough to try out the feature
> set and collect feedback.
> 
> 
> [1] https://s.apache.org/ozone-0.5.0-fixed-issues
> 
> Thanks,
> Dinesh Chitlangia



Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC0

2020-08-24 Thread Hanisha Koneru
Hi Sammi,

Thanks for creating the RC. If you do spin up a new RC, could you please 
consider including the following two JIRAs in the release:
HDDS-4068 
HDDS-4063

Thanks,
Hanisha


> On Aug 24, 2020, at 1:01 PM, Attila Doroszlai  wrote:
> 
> Hi Sammi,
> 
> Thanks for creating the RC.  I have found that there are some leftover
> references to version "0.6.0" in upgrade scripts and tests
> (HDDS-4139).  Created a pull request to fix it, please consider
> including it in the release.
> 
> thanks,
> Attila
> 
> On Mon, Aug 24, 2020 at 3:55 PM Elek, Marton  > wrote:
>> 
>> 
>> +1 (binding)
>> 
>> 
>> 
>> 
>> 1. verified signatures
>> 
>> 2. verified checksums
>> 
>> 3. verified the output of `ozone version` (includes the good git revision)
>> 
>> 4. verified that the source package matches the git tag
>> 
>> 5. verified source can be used to build Ozone without previous state
>> (docker run -v ... -it maven ... --> built from the source with zero
>> local maven cache during 30 minutes)
>> 
>> 6. Verified Ozone can be used from binary package (cd compose/ozone &&
>> test.sh --> all tests passed)
>> 
>> 7. Verified documentation is included in SCM UI
>> 
>> 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
>> 
>> 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
>> executor) [2]
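Steps 1 and 2 in the checklist above (signature and checksum verification) are typically scripted. A minimal sketch of the checksum half in Python, assuming the `.sha512` sidecar carries a bare hex digest as its first whitespace-separated token (some Apache releases instead publish `gpg --print-md` output, which needs extra parsing):

```python
import hashlib
from pathlib import Path

def sha512_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks and return its hex SHA-512 digest."""
    digest = hashlib.sha512()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()

def verify_checksum(artifact: Path, sidecar: Path) -> bool:
    """Compare the artifact's digest against the first token of its .sha512 sidecar."""
    expected = sidecar.read_text().split()[0].lower()
    return sha512_of(artifact) == expected
```

Signature verification (step 1) still requires `gpg --verify` against the published KEYS file; the sketch above covers only the checksum check.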
>> 
>> 
>> I know about a few performance problems (like HDDS-4119) but I don't
>> think we should further block the release (they are not regressions, just
>> improvements). If we have significant performance improvements
>> soon, we can release 1.0.1 within a month.
>> 
>> 
>> Thanks for the great work, Sammi!
>> 
>> Marton
>> 
>> [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
>> [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
>> 
>> 
>> 
>> 
>> On 8/20/20 3:12 PM, Sammi Chen wrote:
>>> my +1(binding)
>>> 
>>>-  Verified ozone version of binary package
>>> 
>>>-  Verified ozone source package content with ozone-1.0.0-RC0 tag
>>> 
>>>-  Built ozone from source package
>>> 
>>>-  Upgraded an existing 1+3 cluster using RC0 binary package
>>> 
>>>-  Checked Ozone UI, SCM UI, Datanode UI and Recon UI
>>> 
>>>-  Ran TestDFSIO write/read with Hadoop 2.7.5
>>> 
>>>-  Verified basic o3fs operations, upload and download of a file
>>> 
>>>-  Created a bucket using the aws CLI, uploaded and downloaded a 10G file through s3g
>>> 
>>> Thanks,
>>> Sammi
>>> 
>>> On Thu, Aug 20, 2020 at 8:55 PM Sammi Chen  wrote:
>>> 
 
 This Ozone 1.0.0 release includes 620 JIRAs,
 
 https://issues.apache.org/jira/issues/?jql=project+%3D+HDDS+AND+%28cf%5B12310320%5D+%3D+0.6.0+or+fixVersion+%3D+0.6.0%29
 
 Thanks everyone for putting in the effort and making this happen.
 
 You can find the RC0 artifacts at:
 https://home.apache.org/~sammichen/ozone-1.0.0-rc0/
 
 Maven artifacts are staged at:
 https://repository.apache.org/content/repositories/orgapachehadoop-1277
 
 The public key used for signing the artifacts can be found at:
 https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
 
 The RC0 tag in github is at:
 https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC0
 
 *The vote will run for 7 days, ending on Aug 27th 2020 at 11:59 pm CST.*
 
 Thanks,
 Sammi Chen
 
 
 
>>> 
>> 


Re: [VOTE] Apache Hadoop Ozone 1.0.0 RC1

2020-08-31 Thread Hanisha Koneru
Thank you Sammi for putting up the RC.

+1 (binding)

 - Built from source tarball
 - Ran integration tests and sanity checks
 - Built a 5 node cluster with OM HA
   - Tested reads and writes
   - Tested OM restarts and failovers
   - Tested Ozone shell commands

Thanks
Hanisha


> On Aug 31, 2020, at 4:22 PM, Bharat Viswanadham  wrote:
> 
> +1 (binding)
> 
> *  Built from the source tarball.
> *  Verified the checksums and signatures.
> *  Verified basic Ozone file system(o3fs) and S3 operations via AWS S3 CLI
> on the OM HA un-secure cluster.
> *  Verified ozone shell commands via CLI on the OM HA un-secure cluster.
> *  Verified basic Ozone file system and S3 operations via AWS S3 CLI on the
> OM HA secure cluster.
> *  Verified ozone shell commands via CLI on the OM HA secure cluster.
> 
> Thanks, Sammi for driving the release.
> 
> Regards,
> Bharat
> 
> 
> On Mon, Aug 31, 2020 at 10:23 AM Xiaoyu Yao 
> wrote:
> 
>> +1 (binding)
>> 
>> * Verify the checksums and signatures.
>> * Verify basic Ozone file system and S3 operations via CLI in secure docker
>> compose environment
>> * Run MR examples and teragen/terasort with ozone secure enabled.
>> * Verify EN/CN document rendering with hugo serve
>> 
>> Thanks Sammi for driving the release.
>> 
>> Regards,
>> Xiaoyu
>> 
>> On Mon, Aug 31, 2020 at 8:55 AM Shashikant Banerjee
>>  wrote:
>> 
>>> +1(binding)
>>> 
>>> 1.Verified checksums
>>> 2.Verified signatures
>>> 3.Verified the output of `ozone version`
>>> 4.Tried creating volume and bucket, write and read key, by Ozone shell
>>> 5.Verified basic Ozone Filesystem operations
>>> 
>>> Thank you very much Sammi for putting up the release together.
>>> 
>>> Thanks
>>> Shashi
>>> 
>>> On Mon, Aug 31, 2020 at 4:35 PM Elek, Marton  wrote:
>>> 
 +1 (binding)
 
 
 1. verified signatures
 
 2. verified checksums
 
 3. verified the output of `ozone version` (includes the good git revision)
 
 4. verified that the source package matches the git tag
 
 5. verified source can be used to build Ozone without previous state
 (docker run -v ... -it maven ... --> built from the source with zero
 local maven cache during 16 minutes --> did it on a server this time)
 
 6. Verified Ozone can be used from binary package (cd compose/ozone &&
 test.sh --> all tests passed)
 
 7. Verified documentation is included in SCM UI
 
 8. Deployed to Kubernetes and executed Teragen on Yarn [1]
 
 9. Deployed to Kubernetes and executed Spark (3.0) Word count (local
 executor) [2]
 
 10. Deployed to Kubernetes and executed Flink Word count [3]
 
 11. Deployed to Kubernetes and executed Nifi
 
 Thanks very much, Sammi, for driving this release...
 Marton
 
 ps: NiFi setup requires some more testing. Counters were not updated on
 the UI, and in some cases I saw DirNotFound exceptions when I used
 master. But during the last test with -rc1 it worked well.
 
 [1]: https://github.com/elek/ozone-perf-env/tree/master/teragen-ozone
 
 [2]: https://github.com/elek/ozone-perf-env/tree/master/spark-ozone
 
 [3]: https://github.com/elek/ozone-perf-env/tree/master/flink-ozone
 
 
 On 8/25/20 4:01 PM, Sammi Chen wrote:
> RC1 artifacts are at:
> https://home.apache.org/~sammichen/ozone-1.0.0-rc1/
> 
> 
> Maven artifacts are staged at:
> 
> https://repository.apache.org/content/repositories/orgapachehadoop-1278
> 
> 
> The public key used for signing the artifacts can be found at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> The RC1 tag in github is at:
> https://github.com/apache/hadoop-ozone/releases/tag/ozone-1.0.0-RC1
> 
> Change log of RC1:
> 1. HDDS-4063. Fix InstallSnapshot in OM HA
> 2. HDDS-4139. Update version number in upgrade tests.
> 3. HDDS-4144. Update version info in hadoop client dependency readme.
> 
> *The vote will run for 7 days, ending on Aug 31st 2020 at 11:59 pm PST.*
> 
> Thanks,
> Sammi Chen
> 
 
 
 
>>> 
>> 





Re: [VOTE] Moving Ozone to a separated Apache project

2020-09-28 Thread Hanisha Koneru
+1

Thanks,
Hanisha

> On Sep 27, 2020, at 11:48 PM, Akira Ajisaka  wrote:
> 
> +1
> 
> Thanks,
> Akira
> 
> On Fri, Sep 25, 2020 at 3:00 PM Elek, Marton wrote:
>> 
>> Hi all,
>> 
>> Thank you for all the feedback and requests,
>> 
>> As we discussed in the previous thread(s) [1], Ozone is proposed to be a
>> separate Apache Top-Level Project (TLP).
>> 
>> The proposal with all the details, motivation and history is here:
>> 
>> https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Hadoop+subproject+to+Apache+TLP+proposal
>> 
>> This vote runs for 7 days and will conclude on the 2nd of October, 6 AM
>> GMT.
>> 
>> Thanks,
>> Marton Elek
>> 
>> [1]:
>> https://lists.apache.org/thread.html/rc6c79463330b3e993e24a564c6817aca1d290f186a1206c43ff0436a%40%3Chdfs-dev.hadoop.apache.org%3E
>> 


Re: [VOTE] Release Apache Hadoop 2.8.2 (RC1)

2017-10-20 Thread Hanisha Koneru
Hi Junping,

Thanks for preparing the 2.8.2-RC1 release.

Verified the following:
- Built from source on Mac OS X 10.11.6 with Java 1.7.0_79
- Deployed binary to a 3-node docker cluster
- Sanity checks
- Basic dfs operations
- MapReduce Wordcount & Grep


+1 (non-binding)



Thanks,
Hanisha

On 10/19/17, 5:42 PM, "Junping Du"  wrote:

>Hi folks,
> I've created our new release candidate (RC1) for Apache Hadoop 2.8.2.
>
> Apache Hadoop 2.8.2 is the first stable release of the Hadoop 2.8 line and 
> will be the latest stable/production release for Apache Hadoop - it includes 
> 315 newly fixed issues since 2.8.1, of which 69 fixes are marked as 
> blocker/critical issues.
>
>  More information about the 2.8.2 release plan can be found here: 
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>
>  New RC is available at: 
> http://home.apache.org/~junping_du/hadoop-2.8.2-RC1
>
>  The RC tag in git is: release-2.8.2-RC1, and the latest commit id is: 
> 66c47f2a01ad9637879e95f80c41f798373828fb
>
>  The maven artifacts are available via 
> repository.apache.org at: 
> https://repository.apache.org/content/repositories/orgapachehadoop-1064
>
>  Please try the release and vote; the vote will run for the usual 5 days, 
> ending on 10/24/2017 at 6 pm PST.
>
>Thanks,
>
>Junping
>


Access to Confluence Wiki

2017-10-27 Thread Hanisha Koneru
Hi,

Can I please get access to the Confluence Hadoop Wiki. My confluence id is 
“hanishakoneru”.

Thanks,
Hanisha



Re: Access to Confluence Wiki

2017-10-31 Thread Hanisha Koneru
Thank you, Akira.

-Hanisha

On 10/30/17, 11:01 PM, "Akira Ajisaka"  wrote:

>Done. Welcome!
>
>-Akira
>
>On 2017/10/28 3:26, Hanisha Koneru wrote:
>> Hi,
>> 
>> Can I please get access to the Confluence Hadoop Wiki. My confluence id is 
>> “hanishakoneru”.
>> 
>> Thanks,
>> Hanisha
>> 
>



Re: [VOTE] Release Apache Hadoop 2.7.5 (RC0)

2017-12-04 Thread Hanisha Koneru
Thanks Konstantin for putting up the 2.7.5 RC0 release.

+1 (non-binding).

Verified the following:
- Built from source on Mac OS X 10.11.6 with Java 1.7.0_79
- Deployed binary to a 3-node docker cluster
- Sanity checks
- Basic dfs operations
- MapReduce Wordcount & Grep


Thanks,
Hanisha

On 12/1/17, 8:42 PM, "Konstantin Shvachko"  wrote:

>Hi everybody,
>
>This is the next dot release of the Apache Hadoop 2.7 line. The previous one,
>2.7.4, was released on August 4, 2017.
>Release 2.7.5 includes critical bug fixes and optimizations. See more
>details in Release Note:
>http://home.apache.org/~shv/hadoop-2.7.5-RC0/releasenotes.html
>
>The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC0/
>
>Please give it a try and vote on this thread. The vote will run for 5 days
>ending 12/08/2017.
>
>My up to date public key is available from:
>https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
>Thanks,
>--Konstantin


Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Hanisha Koneru
Thanks Wangda for putting up the RC for 3.1.0.

+1 (binding).

Verified the following:
- Built from source
- Deployed binary to a 3-node docker cluster
- Sanity checks
- Basic dfs operations
- MapReduce Wordcount & Grep


Thanks,
Hanisha

On 4/3/18, 9:33 AM, "Arpit Agarwal"  wrote:

>Thanks Wangda, I see the shaded jars now.
>
>Are the repo jars required to be the same as the binary release? They don’t 
>match right now, probably they got rebuilt.
>
>+1 (binding), modulo that remaining question.
>
>* Verified signatures
>* Verified checksums for source and binary artefacts
>* Sanity checked jars on r.a.o. 
>* Built from source
>* Deployed to 3 node secure cluster with NameNode HA
>* Verified HDFS web UIs
>* Tried out HDFS shell commands
>* Ran sample MapReduce jobs
>
>Thanks!
>
>
>--
>From: Wangda Tan 
>Date: Monday, April 2, 2018 at 9:25 PM
>To: Arpit Agarwal 
>Cc: Gera Shegalov , Sunil G , 
>"yarn-...@hadoop.apache.org" , Hdfs-dev 
>, Hadoop Common , 
>"mapreduce-...@hadoop.apache.org" , Vinod 
>Kumar Vavilapalli 
>Subject: Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)
>
>As pointed out by Arpit, the previously deployed shaded jars were incorrect. I just 
>redeployed the jars and staged them. @Arpit, could you please check the updated Maven 
>repo? https://repository.apache.org/content/repositories/orgapachehadoop-1092 
>
>Since the jars inside the binary tarballs are correct 
>(http://people.apache.org/~wangda/hadoop-3.1.0-RC1/), I think we don't need to 
>roll another RC; just updating the Maven repo should be sufficient. 
>
>Best,
>Wangda
>
>
>On Mon, Apr 2, 2018 at 2:39 PM, Wangda Tan  wrote:
>Hi Arpit, 
>
>Thanks for pointing out this.
>
>I just removed all .md5 files from the artifacts. I found MD5 checksums still 
>exist in the .mds files; I didn't remove them from the .mds files because they are 
>generated by the create-release script and the Apache guidance is "should not" instead 
>of "must not". Please let me know if you think they need to be removed as 
>well. 
>
>- Wangda
>
>
>
>On Mon, Apr 2, 2018 at 1:37 PM, Arpit Agarwal 
> wrote:
>Thanks for putting together this RC, Wangda. 
>
>The guidance from Apache is to omit MD5s, specifically:
>  > SHOULD NOT supply a MD5 checksum file (because MD5 is too broken).
>
>https://www.apache.org/dev/release-distribution#sigs-and-sums
>
> 
>
>
>On Apr 2, 2018, at 7:03 AM, Wangda Tan  wrote:
>
>Hi Gera,
>
>It's my bad, I thought only src/bin tarball is enough.
>
>I just uploaded all other things under artifact/ to
>http://people.apache.org/~wangda/hadoop-3.1.0-RC1/
>
>Please let me know if you have any other comments.
>
>Thanks,
>Wangda
>
>
>On Mon, Apr 2, 2018 at 12:50 AM, Gera Shegalov  wrote:
>
>
>Thanks, Wangda!
>
>There are many more artifacts in previous votes, e.g., see
>http://home.apache.org/~junping_du/hadoop-2.8.3-RC0/ .  Among others the
>site tarball is missing.
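Completeness checks like this can be mechanized before voting. A small sketch, under the assumption that every `*.tar.gz` in the staging directory should ship with `.asc` and `.sha512` sidecars — the actual artifact layout varies between releases, so the expected extensions here are illustrative, not an official checklist:

```python
from pathlib import Path

# Assumed sidecar files for each release tarball (illustrative only).
EXPECTED_SIDECARS = (".asc", ".sha512")

def missing_sidecars(rc_dir: Path) -> list[str]:
    """Return the names of expected sidecar files that are absent."""
    missing = []
    for tarball in sorted(rc_dir.glob("*.tar.gz")):
        for ext in EXPECTED_SIDECARS:
            sidecar = tarball.with_name(tarball.name + ext)
            if not sidecar.exists():
                missing.append(sidecar.name)
    return missing
```

Running this against a downloaded RC directory before voting surfaces gaps such as a missing checksum file at a glance.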
>
>On Sun, Apr 1, 2018 at 11:54 PM Sunil G  wrote:
>
>
>Thanks Wangda for initiating the release.
>
>I tested this RC built from source file.
>
>
>  - Tested MR apps (sleep, wc) and verified both new YARN UI and old RM
>UI.
>  - Below feature sanity checks were done:
> - Application priority
> - Application timeout
> - Intra Queue preemption with priority based
> - DS based affinity tests to verify placement constraints.
>  - Tested basic NodeLabel scenarios.
> - Added a couple of labels to a few of the nodes and the behavior is
> correct.
> - Verified old UI  and new YARN UI for labels.
> - Submitted apps to labelled cluster and it works fine.
> - Also performed few cli commands related to nodelabel.
>  - Tested basic HA cases and they seem correct.
>  - Tested new YARN UI . All pages are getting loaded correctly.
>
>
>- Sunil
>
>On Fri, Mar 30, 2018 at 9:45 AM Wangda Tan  wrote:
>
>
>Hi folks,
>
>Thanks to the many who helped with this release since Dec 2017 [1].
>We've created RC1 for Apache Hadoop 3.1.0. The artifacts are available here:
>
>http://people.apache.org/~wangda/hadoop-3.1.0-RC1
>
>The RC tag in git is release-3.1.0-RC1. Last git commit SHA is
>16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
>
>The maven artifacts are available via http://repository.apache.org at
>https://repository.apache.org/content/repositories/orgapachehadoop-1090/
>
>This vote will run 5 days, ending on Apr 3 at 11:59 pm Pacific.
>
>3.1.0 contains 766 [2] fixed JIRA issues since 3.0.0. Notable additions
>include first-class GPU/FPGA support on YARN, native services, support for
>rich placement constraints in YARN, S3-related enhancements, and allowing
>HDFS block replicas to be provided by an external storage system.
>
>For 3.1.0 RC0 vote discussion, please see [3].
>
>We’d like to use this as a starting release for 3.1.x [1], depending on how
>it goes, get it stabilized and 

Re: [VOTE] Release Apache Hadoop 3.1.0 (RC1)

2018-04-03 Thread Hanisha Koneru
Correction: My vote is NON-BINDING. Sorry for the confusion.


Thanks,
Hanisha

On 4/3/18, 11:40 AM, "Hanisha Koneru"  wrote:

>Thanks Wangda for putting up the RC for 3.1.0.
>
>+1 (binding).
>
>Verified the following:
>- Built from source
>- Deployed binary to a 3-node docker cluster
>- Sanity checks
>   - Basic dfs operations
>   - MapReduce Wordcount & Grep
>
>
>Thanks,
>Hanisha
>

Re: [VOTE] Merge ContainerIO branch (HDDS-48) in to trunk

2018-07-02 Thread Hanisha Koneru
+1

Thanks,
Hanisha

On 7/2/18, 9:24 AM, "Ajay Kumar"  wrote:

>+1 (non-binding)
>
>On 7/1/18, 11:21 PM, "Mukul Kumar Singh"  wrote:
>
>+1
>
>On 30/06/18, 11:33 AM, "Shashikant Banerjee"  
> wrote:
>
>+1(non-binding)
>
>Thanks
>Shashi
>
>On 6/30/18, 11:19 AM, "Nandakumar Vadivelu" 
>  wrote:
>
>+1
>
>On 6/30/18, 3:44 AM, "Bharat Viswanadham" 
>  wrote:
>
>Fixing subject line of the mail.
>
>
>Thanks,
>Bharat
>
>
>
>On 6/29/18, 3:10 PM, "Bharat Viswanadham" 
>  wrote:
>
>Hi All,
>
>Given the positive response to the discussion thread [1], 
> here is the formal vote thread to merge HDDS-48 in to trunk.
>
>Summary of code changes:
>1. Code changes for this branch are done in the 
> hadoop-hdds subproject and hadoop-ozone subproject, there is no impact to 
> hadoop-hdfs.
>2. Added support for multiple container types in the 
> datanode code path.
>3. Added disk layout logic for the containers to supports 
> future upgrades.
>4. Added support for volume Choosing policy to distribute 
> containers across disks on the datanode.
>5. Changed the format of the .container file to a 
> human-readable format (yaml)
>
>
> The vote will run for 7 days, ending Fri July 6th. I will 
> start this vote with my +1.
>
>Thanks,
>Bharat
>
>[1] 
> https://lists.apache.org/thread.html/79998ebd2c3837913a22097102efd8f41c3b08cb1799c3d3dea4876b@%3Chdfs-dev.hadoop.apache.org%3E
>


Re: [VOTE] Release Apache Hadoop Ozone 0.2.1-alpha (RC0)

2018-09-25 Thread Hanisha Koneru
Thanks Marton for putting together the first RC for Ozone.

+1 (binding)

Verified the following:
  - Verified the signature
  - Built from Source
  - Deployed Pseudo HDFS and Ozone clusters and verified basic operations


Thanks,
Hanisha

On 9/19/18, 2:49 PM, "Elek, Marton"  wrote:

>Hi all,
>
>After the recent discussion about the first Ozone release I've created 
>the first release candidate (RC0) for Apache Hadoop Ozone 0.2.1-alpha.
>
>This release is alpha quality: it’s not recommended for use in production, 
>but we believe that it’s stable enough to try out the feature set and 
>collect feedback.
>
>The RC artifacts are available from: 
>https://home.apache.org/~elek/ozone-0.2.1-alpha-rc0/
>
>The RC tag in git is: ozone-0.2.1-alpha-RC0 (968082ffa5d)
>
>Please try the release and vote; the vote will run for the usual 5 
>working days, ending on September 26, 2018 10pm UTC time.
>
>The easiest way to try it out is:
>
>1. Download the binary artifact
>2. Read the docs at ./docs/index.html
>3. TLDR; cd compose/ozone && docker-compose up -d
>
>
>Please try it out, vote, or just give us feedback.
>
>Thank you very much,
>Marton
>
>ps: Next week, we will have a BoF session at ApacheCon North Europe, 
>Montreal on Monday evening. Please join if you are interested, need 
>support to try out the package, or just have any feedback.
>
>
>


Re: [VOTE] Release Apache Hadoop Ozone 0.2.1-alpha (RC0)

2018-09-25 Thread Hanisha Koneru
Correction: My vote is NON-BINDING. Sorry for the confusion.

Thanks,
Hanisha

On 9/25/18, 7:29 PM, "Hanisha Koneru"  wrote:

>Thanks Marton for putting together the first RC for Ozone.
>
>+1 (binding)
>
>Verified the following:
>  - Verified the signature
>  - Built from Source
>  - Deployed Pseudo HDFS and Ozone clusters and verified basic operations
>
>
>Thanks,
>Hanisha
>


Re: [VOTE] - HDDS-4 Branch merge

2019-01-11 Thread Hanisha Koneru
+1 (binding)

Thanks,
Hanisha

On 1/11/19, 7:40 AM, "Anu Engineer"  wrote:

>Since I have not heard any concerns, I will start a VOTE thread now.
>This vote will run for 7 days and will end on Jan/18/2019 @ 8:00 AM PST.
>
>I will start with my vote, +1 (Binding)
>
>Thanks
>Anu
>
>
>-- Forwarded message -
>From: Anu Engineer 
>Date: Mon, Jan 7, 2019 at 5:10 PM
>Subject: [Discuss] - HDDS-4 Branch merge
>To: , 
>
>
>Hi All,
>
>I would like to propose a merge of the HDDS-4 branch into Hadoop trunk.
>The HDDS-4 branch implements the security work for HDDS and Ozone.
>
>HDDS-4 branch contains the following features:
>- Hadoop Kerberos and Tokens support
>- A Certificate infrastructure used by Ozone and HDDS.
>- Audit Logging and parsing support (Spread across trunk and HDDS-4)
>- S3 Security Support - AWS Signature Support.
>- Apache Ranger Support for Ozone
>
>I will follow up with a formal vote later this week if I hear no
>objections. AFAIK, the changes are isolated to HDDS/Ozone and should not
>impact any other Hadoop project.
>
>Thanks
>Anu



Re: [DISCUSS] Making submarine to different release model like Ozone

2019-02-01 Thread Hanisha Koneru
This is a great proposal. +1.

Thanks,
Hanisha

On 2/1/19, 11:04 AM, "Bharat Viswanadham"  wrote:

>Thank You Wangda for driving this discussion.
>+1 for a separate release for submarine.
>Having its own release cadence will help the project iterate and grow at a 
>faster pace, get new features into users' hands sooner, and gather their 
>feedback quickly.
>
>
>Thanks,
>Bharat
>
>
>
>
>On 2/1/19, 10:54 AM, "Ajay Kumar"  wrote:
>
>+1, Thanks for driving this. With the rise of use cases running ML alongside 
> traditional applications, this will be of great help.
>
>Thanks,
>Ajay   
>
>On 2/1/19, 10:49 AM, "Suma Shivaprasad"  
> wrote:
>
>+1. Thanks for bringing this up Wangda.
>
>Makes sense to have Submarine follow its own release cadence given the 
> good
>momentum/adoption so far. Also, making it run with older versions of 
> Hadoop
>would drive higher adoption.
>
>Suma
>
>On Fri, Feb 1, 2019 at 9:40 AM Eric Yang  wrote:
>
>> Submarine is an application built for the YARN framework, but it does not
>> have a strong dependency on YARN development.  For this kind of project,
>> it would be best to enter the Apache Incubator cycle to create a new
>> community.  Apache Commons is the only project other than the Incubator
>> that has independent release cycles.  Its collection is large, and the
>> project goal is ambitious.  No one really knows which components work
>> with each other in Apache Commons.  Hadoop is a much more focused project
>> on a distributed computing framework, not an incubation sandbox.  To stay
>> aligned with Hadoop's goals, we want to prevent the Hadoop project from
>> being overloaded while allowing good ideas to be carried forward in the
>> Apache Incubator.  Putting on my Apache Member hat, my vote is -1 on
>> allowing more independent subproject release cycles in the Hadoop project
>> that do not align with Hadoop project goals.
>>
>> The Apache Incubator process is highly recommended for Submarine:
>> https://incubator.apache.org/policy/process.html  This allows Submarine
>> to be developed for older versions of Hadoop, like Spark works with
>> multiple versions of Hadoop.
>>
>> Regards,
>> Eric
>>
>> On 1/31/19, 10:51 PM, "Weiwei Yang"  wrote:
>>
>> Thanks for proposing this Wangda, my +1 as well.
>> It is amazing to see the progress made in Submarine last year; the
>> community is growing fast and is quite collaborative. I can see the
>> reasons to release it faster in its own cycle. And at the same time, the
>> Ozone way works very well.
>>
>> —
>> Weiwei
>> On Feb 1, 2019, 10:49 AM +0800, Xun Liu , 
> wrote:
>> > +1
>> >
>> > Hello everyone,
>> >
>> > I am Xun Liu, the head of the machine learning team at Netease
>> > Research Institute. I quite agree with Wangda.
>> >
>> > Our team is very grateful for getting the Submarine machine learning
>> > engine from the community.
>> > We are heavy users of Submarine.
>> > Because Submarine fits the direction of our big data team's Hadoop
>> > technology stack, it avoids the need to increase manpower investment
>> > in learning other container scheduling systems.
>> > The important thing is that we can use a common YARN cluster to run
>> > machine learning, which makes the utilization of server resources more
>> > efficient and has saved us a lot of human and material resources over
>> > the previous years.
>> >
>> > Our team has finished testing and deploying Submarine and will provide
>> > the service to our e-commerce department (http://www.kaola.com/)
>> > shortly.
>> >
>> > We also plan to provide the Submarine engine in our existing YARN
>> > cluster in the next six months, because many of our product
>> > departments need machine learning services, for example:
>> > 1) Game department (http://game.163.com/) needs AI battle training,
>> > 2) News department (http://www.163.com) needs news recommendation,
>> > 3) Mailbox department (http://www.163.com) requires anti-spam and
>> > illegal-content detection,
>> > 4) Music department (https://music.163.com/) requires music
>> > recommendation,
>> > 5) Education department (http://www.youdao.com) requires voice
>> > recognition,
>> > 6) Massive Open Online Courses (https://open.163.com/

Re: [VOTE] Propose to start new Hadoop sub project "submarine"

2019-02-04 Thread Hanisha Koneru
+1 (non-binding).

Thanks,
Hanisha

On 2/4/19, 10:16 AM, "Weiwei Yang"  wrote:

>+1
>
>Weiwei
>
>--
>Weiwei
>On Feb 5, 2019, 2:11 AM +0800, Steve Loughran , wrote:
>> +1, binding
>>
>> > On 1 Feb 2019, at 22:15, Wangda Tan  wrote:
>> >
>> > Hi all,
>> >
>> > According to the positive feedback from the thread [1],
>> >
>> > this is a vote thread to start a new subproject named "hadoop-submarine"
>> > which follows the release process already established for Ozone.
>> >
>> > The vote runs for the usual 7 days, ending Feb 8th at 5 PM PDT.
>> >
>> > Thanks,
>> > Wangda Tan
>> >
>> > [1]
>> > https://lists.apache.org/thread.html/f864461eb188bd12859d51b0098ec38942c4429aae7e4d001a633d96@%3Cyarn-dev.hadoop.apache.org%3E
>>
>>
>>


Re: [VOTE] Release Apache Hadoop 3.1.2 - RC1

2019-02-04 Thread Hanisha Koneru
Thanks Sunil and Wangda for putting up the RC.

+1 (non-binding)
Verified the following:
- Built from source
- Deployed binary to a 3-node docker cluster
- Sanity checks - basic dfs operations



Thanks,
Hanisha

On 1/28/19, 10:19 PM, "Sunil G"  wrote:

>Hi Folks,
>
>On behalf of Wangda, we have an RC1 for Apache Hadoop 3.1.2.
>
>The artifacts are available here:
>http://home.apache.org/~sunilg/hadoop-3.1.2-RC1/
>
>The RC tag in git is release-3.1.2-RC1:
>https://github.com/apache/hadoop/commits/release-3.1.2-RC1
>
>The maven artifacts are available via repository.apache.org at
>https://repository.apache.org/content/repositories/orgapachehadoop-1215
>
>This vote will run 5 days from now.
>
>3.1.2 contains 325 [1] fixed JIRA issues since 3.1.1.
>
>We have done testing with a pseudo cluster and distributed shell job.
>
>My +1 to start.
>
>Best,
>Wangda Tan and Sunil Govindan
>
>[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.2)
>ORDER BY priority DESC


Re: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0

2017-07-06 Thread Hanisha Koneru
Thanks for the hard work Andrew!

- Built from source on Mac OS X 10.11.6 with Java 1.8.0_91
- Built from source on CentOS Linux 7.3.161, with Java 1.8.0_92, with and 
without native
- Deployed a 10 node cluster on docker containers
- Tested basic dfs operations
- Tested basic erasure coding (adding files, recovering corrupted files)
- Tested some dfsadmin operations : report, triggerblockreport

+1 (non-binding)



Thanks,
Hanisha

On 7/6/17, 3:57 PM, "Lei Xu"  wrote:

>+1 (binding)
>
>Ran the following tests:
>* Deployed a pseudo cluster using the tarball, ran pi.
>* Verified MD5 of tarballs for both src and dist.
>* Built src tarball with -Pdist,tar
>
>Thanks Andrew for the efforts!
>
>On Thu, Jul 6, 2017 at 3:44 PM, Andrew Wang  wrote:
>> Thanks all for the votes so far!
>>
>> I think we're still at a single binding +1 from myself, so I'll leave this
>> vote open until we reach the minimum threshold of 3. I'm still hoping we
>> can push the release out before the weekend.
>>
>> On Thu, Jul 6, 2017 at 2:58 PM, Vijaya Krishna Kalluru Subbarao <
>> vij...@cloudera.com> wrote:
>>
>>> Ran Smokes and BVTs covering basic sanity testing(10+ tests ran) for all
>>> these components:
>>>
>>>- Mapreduce(compression, archives, pipes, JHS),
>>>- Avro(AvroMapreduce, HadoopAvro, HiveAvro, SqoopAvro),
>>>- HBase(Balancer, compression, ImportExport, Snapshots, Schema
>>>change),
>>>- Oozie(Hive, Pig, Spark),
>>>- Pig(PigAvro, PigParquet, PigCompression),
>>>- Search(SolrCtlBasic, SolrRequestForwading, SolrSSLConfiguration).
>>>
>>> +1 non-binding.
>>>
>>> Regards,
>>> Vijay
>>>
>>> On Thu, Jul 6, 2017 at 2:39 PM, Eric Badger >> > wrote:
>>>
 - Verified all checksums signatures
 - Built from src on macOS 10.12.5 with Java 1.8.0u65
 - Deployed single node pseudo cluster
 - Successfully ran sleep and pi jobs
 - Navigated the various UIs

 +1 (non-binding)

 Thanks,

 Eric

 On Thursday, July 6, 2017 3:31 PM, Aaron Fabbri 
 wrote:



 Thanks for the hard work on this!  +1 (non-binding)

 - Built from source tarball on OS X w/ Java 1.8.0_45.
 - Deployed mini/pseudo cluster.
 - Ran grep and wordcount examples.
 - Poked around ResourceManager and JobHistory UIs.
 - Ran all s3a integration tests in US West 2.



 On Thu, Jul 6, 2017 at 10:20 AM, Xiao Chen  wrote:

 > Thanks Andrew!
 > +1 (non-binding)
 >
 >- Verified md5's, checked tarball sizes are reasonable
 >- Built source tarball and deployed a pseudo-distributed cluster with
 >hdfs/kms
 >- Tested basic hdfs/kms operations
 >- Sanity checked webuis/logs
 >
 >
 > -Xiao
 >
 > On Wed, Jul 5, 2017 at 10:33 PM, John Zhuge 
 wrote:
 >
 > > +1 (non-binding)
 > >
 > >
 > >- Verified checksums and signatures of the tarballs
 > >- Built source with native, Java 1.8.0_131 on Mac OS X 10.12.5
 > >- Cloud connectors:
 > >   - A few S3A integration tests
 > >   - A few ADL live unit tests
 > >- Deployed both binary and built source to a pseudo cluster, passed
 > the
 > >following sanity tests in insecure, SSL, and SSL+Kerberos mode:
 > >   - HDFS basic and ACL
 > >   - DistCp basic
 > >   - WordCount (skipped in Kerberos mode)
 > >   - KMS and HttpFS basic
 > >
 > > Thanks Andrew for the great effort!
 > >
 > > On Wed, Jul 5, 2017 at 1:33 PM, Eric Payne >>> > > invalid>
 > > wrote:
 > >
 > > > Thanks Andrew.
 > > > I downloaded the source, built it, and installed it onto a pseudo
 > > > distributed 4-node cluster.
 > > >
 > > > I ran mapred and streaming test cases, including sleep and
 wordcount.
 > > > +1 (non-binding)
 > > > -Eric
 > > >
 > > >   From: Andrew Wang 
 > > >  To: "common-dev@hadoop.apache.org" ;
 "
 > > > hdfs-...@hadoop.apache.org" ; "
 > > > mapreduce-...@hadoop.apache.org" ;
 "
 > > > yarn-...@hadoop.apache.org" 
 > > >  Sent: Thursday, June 29, 2017 9:41 PM
 > > >  Subject: [VOTE] Release Apache Hadoop 3.0.0-alpha4-RC0
 > > >
 > > > Hi all,
 > > >
 > > > As always, thanks to the many, many contributors who helped with
 this
 > > > release! I've prepared an RC0 for 3.0.0-alpha4:
 > > >
 > > > http://home.apache.org/~wang/3.0.0-alpha4-RC0/
 > > >
 > > > The standard 5-day vote would run until midnight on Tuesday, July
 4th.
 > > > Given that July 4th is a holiday in the US, I expect this vote might
 > have
 > > > to be extended, but I'd like to close the vote relatively soon
 after.
 > > >
 > > > I've done my traditional testing of a pseudo-distributed cluster
 with a
 > > > single task pi job, which was successful.
 > > >
 > > > Normally my testing w

Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-07-31 Thread Hanisha Koneru
Hi Konstantin,

Thanks for preparing the 2.7.4-RC0 release.

- Built from source on Mac OS X 10.11.6 with Java 1.7.0_79
- Deployed both binary and built source to a pseudo cluster
- Passed the following sanity checks
- Basic dfs operations
- Wordcount


+1 (non-binding)


Thanks,
Hanisha

On 7/31/17, 1:56 PM, "John Zhuge"  wrote:

>Just confirmed that HADOOP-13707 does fix the NN servlet issue in
>branch-2.7.
>
>On Mon, Jul 31, 2017 at 11:38 AM, Konstantin Shvachko 
>wrote:
>
>> Hi John,
>>
>> Thank you for checking and voting.
>> As far as I know test failures on 2.7.4 are intermittent. We have a jira
>> for that
>> https://issues.apache.org/jira/browse/HDFS-11985
>> but decided it should not block the release.
>> The "dr.who" thing is a configuration issue. This page may be helpful:
>> http://hadoop.apache.org/docs/stable/hadoop-hdfs-httpfs/ServerSetup.html
>>
>> Thanks,
>> --Konstantin
>>
>> On Sun, Jul 30, 2017 at 11:24 PM, John Zhuge  wrote:
>>
>>> Hi Konstantin,
>>>
>>> Thanks a lot for the effort to prepare the 2.7.4-RC0 release!
>>>
>>> +1 (non-binding)
>>>
>>>- Verified checksums and signatures of all tarballs
>>>- Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
>>>- Verified cloud connectors:
>>>   - All S3A integration tests
>>>- Deployed both binary and built source to a pseudo cluster, passed
>>>the following sanity tests in insecure, SSL, and SSL+Kerberos mode:
>>>   - HDFS basic and ACL
>>>   - DistCp basic
>>>   - MapReduce wordcount (only failed in SSL+Kerberos mode for binary
>>>   tarball, probably unrelated)
>>>   - KMS and HttpFS basic
>>>   - Balancer start/stop
>>>
>>> Should we worry about this test failure? It is likely fixed by
>>> https://issues.apache.org/jira/browse/HADOOP-13707.
>>>
>>>- Got “curl: (22) The requested URL returned error: 403 User dr.who
>>>is unauthorized to access this page.” when accessing NameNode web servlet
>>>/jmx, /conf, /logLevel, and /stacks. It passed in branch-2.8.
>>>
>>>
>>> On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko <
>>> shv.had...@gmail.com> wrote:
>>>
 Hi everybody,

 Here is the next release of the Apache Hadoop 2.7 line. The previous stable
 release, 2.7.3, has been available since 25 August, 2016.
 Release 2.7.4 includes 264 issues fixed since release 2.7.3, including
 critical bug fixes and major optimizations. See more details in the Release
 Note:
 http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html

 The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/

 Please give it a try and vote on this thread. The vote will run for 5
 days
 ending 08/04/2017.

 Please note that my up-to-date public key is available from:
 https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
 Please don't forget to refresh the page if you've been there recently.
 There are other places on Apache sites which may contain my outdated key.

 Thanks,
 --Konstantin

>>>
>>>
>>>
>>> --
>>> John
>>>
>>
>>
>
>
>-- 
>John



[jira] [Created] (HADOOP-16991) Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HADOOP-16991:
---

 Summary: Remove RetryInvocation INFO logging from ozone CLI output
 Key: HADOOP-16991
 URL: https://issues.apache.org/jira/browse/HADOOP-16991
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Nilotpal Nandi
Assignee: Hanisha Koneru


In the OM HA failover proxy provider, RetryInvocationHandler logs an error 
message when the client tries contacting a non-leader OM. This error message 
can be suppressed, as the failover to the leader OM would happen anyway.

{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException): OM:om2 is not the leader. Suggested leader is OM:om3.
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
    at org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
    at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
    at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682),
while invoking $Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 failover attempts. Trying to failover immediately.
{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy

2020-07-07 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HADOOP-17116:
---

 Summary: Skip Retry INFO logging on first failover from a proxy
 Key: HADOOP-17116
 URL: https://issues.apache.org/jira/browse/HADOOP-17116
 Project: Hadoop Common
  Issue Type: Task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


RetryInvocationHandler logs an INFO-level message on every failover except the 
first. This was ideal when there were only 2 proxies in the 
FailoverProxyProvider. But if there are more than 2 proxies (as is possible 
with 3 or more NNs in HA), then it could take more than one failover to find 
the currently active proxy.

To avoid creating noise in client logs/consoles, RetryInvocationHandler should 
skip logging once for each proxy.
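A minimal sketch of the proposed once-per-proxy suppression, with hypothetical names (the real change lives in RetryInvocationHandler, which tracks proxies differently):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the HADOOP-17116 behavior: suppress the retry
// INFO line the first time we fail over away from each proxy, and only
// log on subsequent failovers from that same proxy.
public class Main {
    private static final Set<String> failedOverOnce = new HashSet<>();

    // Returns true if the INFO message should be emitted for this proxy.
    static boolean shouldLogFailover(String proxyId) {
        // Set.add() returns true only on the first failover from this
        // proxy, so logging is skipped exactly once per proxy.
        return !failedOverOnce.add(proxyId);
    }

    public static void main(String[] args) {
        System.out.println(shouldLogFailover("om1")); // first failover: skip
        System.out.println(shouldLogFailover("om2")); // first failover: skip
        System.out.println(shouldLogFailover("om1")); // repeat: log
    }
}
```

With 3 OMs the client may silently walk om1 → om2 → om3 on startup, and only genuinely repeated failovers generate log lines.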







[jira] [Created] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-14901:
---

 Summary: ReuseObjectMapper in Hadoop Common
 Key: HADOOP-14901
 URL: https://issues.apache.org/jira/browse/HADOOP-14901
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
Priority: Minor


It is recommended to reuse ObjectMapper, if possible, for better performance. 
We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
some places: they are straightforward and thread-safe.
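The pattern looks roughly like the sketch below (an assumption-laden illustration, not the actual patch: it assumes Jackson 2.x databind on the classpath, and `JsonUtil` is a hypothetical holder class):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;
import java.util.Map;

// Hypothetical sketch of the reuse pattern. ObjectMapper is expensive
// to construct but thread-safe to share once configured, so build it
// once and derive cheap, immutable readers/writers from it instead of
// creating a new mapper per call.
public class JsonUtil {
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private static final ObjectReader MAP_READER = MAPPER.readerFor(Map.class);
    private static final ObjectWriter WRITER = MAPPER.writer();

    static Map<?, ?> parse(String json) throws Exception {
        return MAP_READER.readValue(json);        // ObjectReader is thread-safe
    }

    static String toJson(Object value) throws Exception {
        return WRITER.writeValueAsString(value);  // so is ObjectWriter
    }
}
```

Creating an ObjectMapper inside a hot method forces Jackson to rebuild its serializer caches every time, which is the overhead this JIRA targets.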








[jira] [Reopened] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reopened HADOOP-14901:
-

Patch for branch-2

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Hanisha Koneru
>        Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread-safe.







[jira] [Created] (HADOOP-15164) DataNode Replica Trash

2018-01-08 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-15164:
---

 Summary: DataNode Replica Trash
 Key: HADOOP-15164
 URL: https://issues.apache.org/jira/browse/HADOOP-15164
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Attachments: DataNode_Replica_Trash_Design_Doc.pdf

DataNode Replica Trash will allow administrators to recover from a recent 
delete request that resulted in catastrophic loss of user data. This is 
achieved by placing all invalidated blocks in a replica trash on the datanode 
before completely purging them from the system. The design doc is attached here.







[jira] [Resolved] (HADOOP-15164) DataNode Replica Trash

2018-01-08 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru resolved HADOOP-15164.
-
Resolution: Duplicate

> DataNode Replica Trash
> --
>
> Key: HADOOP-15164
> URL: https://issues.apache.org/jira/browse/HADOOP-15164
> Project: Hadoop Common
>  Issue Type: New Feature
>    Reporter: Hanisha Koneru
>        Assignee: Hanisha Koneru
> Attachments: DataNode_Replica_Trash_Design_Doc.pdf
>
>
> DataNode Replica Trash will allow administrators to recover from a recent 
> delete request that resulted in catastrophic loss of user data. This is 
> achieved by placing all invalidated blocks in a replica trash on the datanode 
> before completely purging them from the system. The design doc is attached 
> here.







[jira] [Created] (HADOOP-15690) Hadoop docs' current should point to the latest release

2018-08-22 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-15690:
---

 Summary: Hadoop docs' current should point to the latest release
 Key: HADOOP-15690
 URL: https://issues.apache.org/jira/browse/HADOOP-15690
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


In http://hadoop.apache.org/docs/, the current folder points to Hadoop 2.9.1.

It should point to the latest release - Hadoop 3.1.1







[jira] [Created] (HADOOP-13246) Support Mutable Short Gauge In Metrics2 lib

2016-06-07 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-13246:
---

 Summary: Support Mutable Short Gauge In Metrics2 lib
 Key: HADOOP-13246
 URL: https://issues.apache.org/jira/browse/HADOOP-13246
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 3.0.0-alpha1
Reporter: Hanisha Koneru


Currently, MutableGaugeInt and MutableGaugeLong are the supported types of 
MutableGauge. Add MutableGaugeShort to this list for keeping track of metrics 
for which even the int range is larger than needed.
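A hypothetical sketch of what such a gauge could look like; the names mirror the existing MutableGaugeInt/Long classes but this is not the actual metrics2 API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of a short-valued gauge in the spirit of the
// metrics2 MutableGaugeInt/MutableGaugeLong classes. Names are
// assumptions, not the real Hadoop API.
public class Main {
    static final class MutableGaugeShort {
        // Backed by AtomicInteger, since java.util.concurrent has no
        // AtomicShort; the value is narrowed to short on read.
        private final AtomicInteger value = new AtomicInteger();

        void incr() { value.incrementAndGet(); }
        void decr() { value.decrementAndGet(); }
        short value() { return (short) value.get(); }
    }

    public static void main(String[] args) {
        MutableGaugeShort gauge = new MutableGaugeShort();
        gauge.incr();
        gauge.incr();
        gauge.decr();
        System.out.println(gauge.value()); // prints 1
    }
}
```

The snapshot step in a real implementation would emit the short value into the MetricsRecordBuilder, as the int and long gauges do.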







[jira] [Created] (HADOOP-14002) Document -DskipShade property in BUILDING.txt

2017-01-19 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-14002:
---

 Summary: Document -DskipShade property in BUILDING.txt
 Key: HADOOP-14002
 URL: https://issues.apache.org/jira/browse/HADOOP-14002
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
Priority: Minor
 Fix For: 3.0.0-alpha2


HADOOP-13999 added a maven profile to disable client jar shading. This property 
should be documented in BUILDING.txt.







[jira] [Created] (HADOOP-14503) MutableMetricsFactory should allow RollingAverages field to be added as a metric

2017-06-07 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-14503:
---

 Summary: MutableMetricsFactory should allow RollingAverages field 
to be added as a metric
 Key: HADOOP-14503
 URL: https://issues.apache.org/jira/browse/HADOOP-14503
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


The RollingAverages metric extends the MutableRatesWithAggregation metric and 
maintains a group of rolling average metrics. This class should be allowed to 
register as a metric with the MetricsSystem.







[jira] [Created] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations

2017-08-03 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-14732:
---

 Summary: ProtobufRpcEngine should use Time.monotonicNow to measure 
durations
 Key: HADOOP-14732
 URL: https://issues.apache.org/jira/browse/HADOOP-14732
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru






