Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/449/

[Sep 18, 2019 5:31:33 PM] (ekrogen) HDFS-14569. Result of crypto -listZones is 
not formatted properly.
[Sep 18, 2019 9:51:21 PM] (kihwal) HDFS-13959. 
TestUpgradeDomainBlockPlacementPolicy is flaky. Contributed
[Sep 19, 2019 12:22:49 AM] (cliang) HDFS-14822. [SBN read] Revisit 
GlobalStateIdContext locking when getting


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-18 Thread 俊平堵
+1.

Thanks,

Junping

Elek, Marton wrote on Tue, Sep 17, 2019 at 5:48 PM:

>
>
> TLDR; I propose to move Ozone related code out from Hadoop trunk and
> store it in a separated *Hadoop* git repository apache/hadoop-ozone.git
>
>
>
>
> When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
> be part of the source tree but with separated release cadence, mainly
> because it had the hadoop-trunk/SNAPSHOT as compile time dependency.
>
> During the last Ozone releases this dependency is removed to provide
> more stable releases. Instead of using the latest trunk/SNAPSHOT build
> from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
>
> As we have no more strict dependency between Hadoop trunk SNAPSHOT and
> Ozone trunk I propose to separate the two code base from each other with
> creating a new Hadoop git repository (apache/hadoop-ozone.git):
>
> With moving Ozone to a separated git repository:
>
>   * It would be easier to contribute and understand the build (as of now
> we always need `-f pom.ozone.xml` as a Maven parameter; see the sketch
> after this list)
>   * It would be possible to adjust build process without breaking
> Hadoop/Ozone builds.
>   * It would be possible to use different Readme/.asf.yaml/github
> template for the Hadoop Ozone and core Hadoop. (For example the current
> github template [2] has a link to the contribution guideline [3]. Ozone
> has an extended version [4] from this guideline with additional
> information.)
>   * Testing would be more safe as it won't be possible to change core
> Hadoop and Hadoop Ozone in the same patch.
>   * It would be easier to cut branches for Hadoop releases (based on the
> original consensus, Ozone should be removed from all the release
> branches after creating release branches from trunk)
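
A minimal sketch of the build difference the `-f pom.ozone.xml` point above
describes; the apache/hadoop-ozone.git clone is the proposal's hypothetical
future state:

  # today, inside the shared hadoop source tree:
  mvn clean install -DskipTests                    # core Hadoop, default POM
  mvn clean install -f pom.ozone.xml -DskipTests   # Ozone needs the extra flag

  # after the proposed split (hypothetical repository):
  git clone https://github.com/apache/hadoop-ozone.git
  cd hadoop-ozone
  mvn clean install -DskipTests                    # default POM, no extra flag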
>
>
> What do you think?
>
> Thanks,
> Marton
>
> [1]:
>
> https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> [2]:
>
> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> [4]:
>
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
>
>
>


Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-18 Thread Dinesh Chitlangia
+1 (non-binding)

- built from the source on centos7
- verified signatures & sha512 checksums
- verified basic HDFS operations with CLI
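
Checks of this kind typically look like the following (a sketch, assuming a
running pseudo cluster; the paths are illustrative):

  bin/hdfs dfs -mkdir -p /tmp/rc-test
  bin/hdfs dfs -put README.txt /tmp/rc-test/
  bin/hdfs dfs -cat /tmp/rc-test/README.txt | head
  bin/hdfs dfs -rm -r /tmp/rc-test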

As Steve already mentioned, we must call out the Guava upgrade.

Thanks for organizing the release, Rohith.

Thanks,
Dinesh




On Wed, Sep 18, 2019 at 2:20 PM Steve Loughran wrote:

> On Wed, Sep 18, 2019 at 6:04 PM Rohith Sharma K S <
> rohithsharm...@apache.org>
> wrote:
>
> > Thanks Steve for detailed verification. Inline comment
> >
> > On Wed, 18 Sep 2019 at 20:34, Steve Loughran wrote:
> >
> > > >
> > > > +1 binding.
> > > >
> > > > One caveat: warn people that guava is now at 27.0 -and that if you
> run
> > > > with an older version of Guava things will inevitably break.
> > >
> > Could you please suggest what is the process to follow now if I want
> > to add this to the release notes? Should I withdraw RC0 and recreate RC1
> > with an updated release note in the corresponding JIRA so that the release
> > script will pick it up? Or is there another way?
> >
>
>
> no, just remember to mention it to people. Though they'll find out soon
> enough...
>


Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-18 Thread Steve Loughran
On Wed, Sep 18, 2019 at 6:04 PM Rohith Sharma K S 
wrote:

> Thanks Steve for detailed verification. Inline comment
>
> On Wed, 18 Sep 2019 at 20:34, Steve Loughran 
> wrote:
>
> > >
> > > +1 binding.
> > >
> > > One caveat: warn people that guava is now at 27.0 -and that if you run
> > > with an older version of Guava things will inevitably break.
> >
> Could you please suggest what is the process to follow now if I want
> to add this to the release notes? Should I withdraw RC0 and recreate RC1 with
> an updated release note in the corresponding JIRA so that the release script
> will pick it up? Or is there another way?
>


no, just remember to mention it to people. Though they'll find out soon
enough...


Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-18 Thread Elek, Marton



+1 (binding)

Thanks Rohith for the work on the release.



 * built from the source (archlinux)
 * verified signatures
 * verified sha512 checksums
 * started a docker-based pseudo cluster
 * tested basic HDFS operations with CLI
 * Checked if the sources are uploaded to the maven staging repo

Note 1: I haven't seen the ./patchprocess/gpgagent.conf file in earlier
releases, and it seems to be included here. But I don't think it's a blocker.


Note 2: the *.sha512 files could be improved before uploading by removing
the absolute path (to make them easier to check with sha512sum -c).
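
A sketch of an interim workaround on the consumer side, assuming the checksum
line has the form "<digest>  /abs/path/to/hadoop-3.2.1.tar.gz":

  sed 's|/.*/||' hadoop-3.2.1.tar.gz.sha512 > hadoop-3.2.1.tar.gz.sha512.local
  sha512sum -c hadoop-3.2.1.tar.gz.sha512.local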



Marton


On 9/18/19 4:25 PM, Ayush Saxena wrote:

Thanks Rohith for driving the release.

+1 (non-binding)

-Built from source on Ubuntu-18.04
-Successful Native build.
-Verified basic HDFS Commands.
-Verified basic Erasure Coding Commands.
-Verified basic RBF commands.
-Browsed HDFS UI.

Thanks

-Ayush

On Wed, 18 Sep 2019 at 15:41, Weiwei Yang  wrote:


+1 (binding)

Downloaded tarball, setup a pseudo cluster manually
Verified basic HDFS operations, copy/view files
Verified basic YARN operations, run sample DS jobs
Verified basic YARN restful APIs, e.g. cluster/nodes info etc
Set and verified YARN node-attributes, including CLI

Thanks
Weiwei
On Sep 18, 2019, 11:41 AM +0800, zhankun tang wrote:

+1 (non-binding).
Installed and verified it by running several Spark and DS jobs.

BR,
Zhankun

On Wed, 18 Sep 2019 at 08:05, Naganarasimha Garla <
naganarasimha...@apache.org> wrote:


Verified the source and the binary tar and the sha512 checksums
Installed and verified the basic hadoop operations (ran a few MR tasks)

+1.

Thanks,
+ Naga

On Wed, Sep 18, 2019 at 1:32 AM Anil Sadineni 
wrote:


+1 (non-binding)

On Tue, Sep 17, 2019 at 9:55 AM Santosh Marella 

wrote:



+1 (non-binding)

On Wed, Sep 11, 2019 at 12:26 AM Rohith Sharma K S <
rohithsharm...@apache.org> wrote:


Hi folks,

I have put together a release candidate (RC0) for Apache Hadoop

3.2.1.


The RC is available at:
http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/

The RC tag in git is release-3.2.1-RC0:
https://github.com/apache/hadoop/tree/release-3.2.1-RC0


The maven artifacts are staged at




https://repository.apache.org/content/repositories/orgapachehadoop-1226/


You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59 pm
PST.

I have done testing with a pseudo cluster and distributed shell job. My +1
to start.

Thanks & Regards
Rohith Sharma K S






--
Thanks & Regards,
Anil Sadineni
Solutions Architect, Optlin Inc
Ph: 571-438-1974 | www.optlin.com









-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-18 Thread Masatake Iwasaki

+0

* built from source tarball.
* launched a 3-node cluster.
* ran some example MR jobs.
* ran some CLI file operations.

Same as 3.1.3 RC0, a verbose INFO message is emitted on data transfer
operations::

  $ bin/hadoop fs -put README.txt '/c;d/'
  2019-09-18 17:41:24,977 INFO sasl.SaslDataTransferClient: SASL 
encryption trust check: localHostTrusted = false, remoteHostTrusted = false


This was already fixed by HDFS-14759.
It was cherry-picked to branch-3.2 and branch-3.1 yesterday.
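
Until a rebuilt release carries HDFS-14759, the message can presumably be
silenced through log4j; a sketch, with the logger name inferred from the INFO
line above:

  # etc/hadoop/log4j.properties
  log4j.logger.org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient=WARN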

While this might not be worth sinking RC0 over, I would not like to give a
bad impression about the quality control of a patch release.

Thanks,
Masatake Iwasaki


On 9/11/19 16:26, Rohith Sharma K S wrote:

Hi folks,

I have put together a release candidate (RC0) for Apache Hadoop 3.2.1.

The RC is available at:
http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/

The RC tag in git is release-3.2.1-RC0:
https://github.com/apache/hadoop/tree/release-3.2.1-RC0


The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1226/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59 pm
PST.

I have done testing with a pseudo cluster and distributed shell job. My +1
to start.

Thanks & Regards
Rohith Sharma K S







[jira] [Created] (HADOOP-16586) ITestS3GuardFsck fails when run using a local metastore

2019-09-18 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16586:
---

 Summary: ITestS3GuardFsck fails when run using a local metastore
 Key: HADOOP-16586
 URL: https://issues.apache.org/jira/browse/HADOOP-16586
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Siddharth Seth


Most of these tests fail with a ClassCastException when run against a local
metastore.

Not sure if these tests are intended to work with Dynamo only. The fix (either
ignoring them in the case of other metastores or fixing the test) would depend
on the original intent.

{code}
---
Test set: org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
---
Tests run: 12, Failures: 0, Errors: 11, Skipped: 1, Time elapsed: 34.653 s <<< 
FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
testIDetectParentTombstoned(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 3.237 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectParentTombstoned(ITestS3GuardFsck.java:190)

testIDetectDirInS3FileInMs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 1.827 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectDirInS3FileInMs(ITestS3GuardFsck.java:214)

testIDetectLengthMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 2.819 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectLengthMismatch(ITestS3GuardFsck.java:311)

testIEtagMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  Time 
elapsed: 2.832 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIEtagMismatch(ITestS3GuardFsck.java:373)

testIDetectFileInS3DirInMs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 2.752 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectFileInS3DirInMs(ITestS3GuardFsck.java:238)

testIDetectModTimeMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 4.103 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectModTimeMismatch(ITestS3GuardFsck.java:346)

testIDetectNoMetadataEntry(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 3.017 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectNoMetadataEntry(ITestS3GuardFsck.java:113)

testIDetectNoParentEntry(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 2.821 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectNoParentEntry(ITestS3GuardFsck.java:136)

testINoEtag(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  Time elapsed: 
4.493 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testINoEtag(ITestS3GuardFsck.java:403)

testIDetectParentIsAFile(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
Time elapsed: 2.782 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
  at 
org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectParentIsAFile(ITestS3GuardFsck.java:163)

testTombstonedInMsNotDeletedInS3(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)
  Time elapsed: 3.008 s  <<< ERROR!
java.lang.ClassCastException: 
org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore 

[jira] [Resolved] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-18 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16547.
-
Resolution: Fixed

> s3guard prune command doesn't get AWS auth chain from FS
> 
>
> Key: HADOOP-16547
> URL: https://issues.apache.org/jira/browse/HADOOP-16547
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> s3guard prune command doesn't get AWS auth chain from any FS, so it just 
> drives the DDB store from the conf settings. If S3A is set up to use 
> Delegation tokens then the DTs/custom AWS auth sequence is not picked up, so 
> you get an auth failure.
> Fix:
> # instantiate the FS before calling initMetadataStore
> # review other commands to make sure the problem isn't replicated






Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-18 Thread Arpit Agarwal
+1


> On Sep 17, 2019, at 2:49 AM, Elek, Marton  wrote:
> 
> 
> 
> TLDR; I propose to move Ozone related code out from Hadoop trunk and store it 
> in a separated *Hadoop* git repository apache/hadoop-ozone.git
> 
> 
> 
> 
> When Ozone was adopted as a new Hadoop subproject it was proposed[1] to be 
> part of the source tree but with separated release cadence, mainly because it 
> had the hadoop-trunk/SNAPSHOT as compile time dependency.
> 
> During the last Ozone releases this dependency is removed to provide more 
> stable releases. Instead of using the latest trunk/SNAPSHOT build from 
> Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
> 
> As we have no more strict dependency between Hadoop trunk SNAPSHOT and Ozone 
> trunk I propose to separate the two code base from each other with creating a 
> new Hadoop git repository (apache/hadoop-ozone.git):
> 
> With moving Ozone to a separated git repository:
> 
> * It would be easier to contribute and understand the build (as of now we 
> always need `-f pom.ozone.xml` as a Maven parameter)
> * It would be possible to adjust build process without breaking Hadoop/Ozone 
> builds.
> * It would be possible to use different Readme/.asf.yaml/github template for 
> the Hadoop Ozone and core Hadoop. (For example the current github template 
> [2] has a link to the contribution guideline [3]. Ozone has an extended 
> version [4] from this guideline with additional information.)
> * Testing would be more safe as it won't be possible to change core Hadoop 
> and Hadoop Ozone in the same patch.
> * It would be easier to cut branches for Hadoop releases (based on the 
> original consensus, Ozone should be removed from all the release branches 
> after creating release branches from trunk)
> 
> 
> What do you think?
> 
> Thanks,
> Marton
> 
> [1]: 
> https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> [2]: 
> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> [4]: 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
> 
> 





Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-18 Thread Rohith Sharma K S
Thanks Steve for detailed verification. Inline comment

On Wed, 18 Sep 2019 at 20:34, Steve Loughran 
wrote:

> >
> > +1 binding.
> >
> > One caveat: warn people that guava is now at 27.0 -and that if you run
> > with an older version of Guava things will inevitably break.
>
Could you please suggest what is the process to follow now if I want
to add this to the release notes? Should I withdraw RC0 and recreate RC1 with
an updated release note in the corresponding JIRA so that the release script
will pick it up? Or is there another way?


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-18 Thread Elek, Marton

> one thing to consider here as you are giving up your ability to make
> changes in hadoop-* modules, including hadoop-common, and their
> dependencies, in sync with your own code. That goes for filesystem contract
> tests.
>
> are you happy with that?


Yes. I think we can live with it.

Fortunately the Hadoop parts which are used by Ozone (security + rpc)
are stable enough; we haven't needed bigger changes until now (small
patches are already included in 3.1/3.2).


I think it's better to use released Hadoop bits in Ozone anyway, and 
worst (best?) case we can try to do more frequent patch releases from 
Hadoop (if required).



m.


On 9/18/19 12:06 PM, Steve Loughran wrote:

one thing to consider here as you are giving up your ability to make
changes in hadoop-* modules, including hadoop-common, and their
dependencies, in sync with your own code. That goes for filesystem contract
tests.

are you happy with that?

On Tue, Sep 17, 2019 at 10:48 AM Elek, Marton  wrote:




TLDR; I propose to move Ozone related code out from Hadoop trunk and
store it in a separated *Hadoop* git repository apache/hadoop-ozone.git




When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
be part of the source tree but with separated release cadence, mainly
because it had the hadoop-trunk/SNAPSHOT as compile time dependency.

During the last Ozone releases this dependency is removed to provide
more stable releases. Instead of using the latest trunk/SNAPSHOT build
from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).

As we have no more strict dependency between Hadoop trunk SNAPSHOT and
Ozone trunk I propose to separate the two code base from each other with
creating a new Hadoop git repository (apache/hadoop-ozone.git):

With moving Ozone to a separated git repository:

   * It would be easier to contribute and understand the build (as of now
we always need `-f pom.ozone.xml` as a Maven parameter)
   * It would be possible to adjust build process without breaking
Hadoop/Ozone builds.
   * It would be possible to use different Readme/.asf.yaml/github
template for the Hadoop Ozone and core Hadoop. (For example the current
github template [2] has a link to the contribution guideline [3]. Ozone
has an extended version [4] from this guideline with additional
information.)
   * Testing would be more safe as it won't be possible to change core
Hadoop and Hadoop Ozone in the same patch.
   * It would be easier to cut branches for Hadoop releases (based on the
original consensus, Ozone should be removed from all the release
> branches after creating release branches from trunk)


What do you think?

Thanks,
Marton

[1]:

https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
[2]:

https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
[3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[4]:

https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org









Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-18 Thread Steve Loughran
>
> +1 binding.
>
> One caveat: warn people that guava is now at 27.0 -and that if you run
> with an older version of Guava things will inevitably break.
>
>
> steps to validate
> ==
>
> * downloaded src and binary artifacts
> * after import of KEYS and trusting Rohith's key, validate GPG signatures
> * test basic hadoop fs commands against s3a with s3guard and abfs
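
A sketch of the signature step, with artifact file names assumed from the RC
staging directory:

  curl -O https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
  gpg --import KEYS
  gpg --verify hadoop-3.2.1.tar.gz.asc hadoop-3.2.1.tar.gz
  gpg --verify hadoop-3.2.1-src.tar.gz.asc hadoop-3.2.1-src.tar.gz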
>
>
>
>
> Validating S3A connector
> 
>
> * grabbed the latest build of my cloudstore diagnostics JAR
> https://github.com/steveloughran/cloudstore/releases/tag/tag_2019-09-13
> * and set an env var to it:
>   set -gx CLOUDSTORE cloudstore/target/cloudstore-0.1-SNAPSHOT.jar
>
> bin/hadoop jar $CLOUDSTORE storediag  s3a://hwdev-steve-ireland-new
>
>
>
>   Diagnostics for filesystem s3a://hwdev-steve-ireland-new/
>   =
>
>   S3A FileSystem connector
>   ASF Filesystem Connector to Amazon S3 Storage and compatible stores
>
> https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html
>
>   Hadoop information
>   ==
>
> Hadoop 3.2.1
> Compiled by rohithsharmaks on 2019-09-10T15:56Z
> Compiled with protoc 2.5.0
> From source with checksum 776eaf9eee9c0ffc370bcbc1888737
>
>   Required Classes
>   
>
>   All these classes must be on the classpath
>
>   class: org.apache.hadoop.fs.s3a.S3AFileSystem
>
>  file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/hadoop-aws-3.2.1.jar
>   class: com.amazonaws.services.s3.AmazonS3
>
>  
> file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.375.jar
>   class: com.amazonaws.ClientConfiguration
>
>  
> file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.375.jar
>
>   Optional Classes
>   
>
>   These classes are needed in some versions of Hadoop.
>   And/or for optional features to work.
>
>   class: com.amazonaws.services.dynamodbv2.AmazonDynamoDB
>
>  
> file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.375.jar
>   class: com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient
>
>  
> file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.375.jar
>   class: com.fasterxml.jackson.annotation.JacksonAnnotation
>
>  
> file:/Users/stevel/hadoop-3.2.1/share/hadoop/common/lib/jackson-annotations-2.9.8.jar
>   class: com.fasterxml.jackson.core.JsonParseException
>
>  
> file:/Users/stevel/hadoop-3.2.1/share/hadoop/common/lib/jackson-core-2.9.8.jar
>   class: com.fasterxml.jackson.databind.ObjectMapper
>
>  
> file:/Users/stevel/hadoop-3.2.1/share/hadoop/common/lib/jackson-databind-2.9.8.jar
>   class: org.joda.time.Interval
>  Not found on classpath: org.joda.time.Interval
>   class: org.apache.hadoop.fs.s3a.s3guard.S3Guard
>
>  file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/hadoop-aws-3.2.1.jar
>   class: org.apache.hadoop.fs.s3a.commit.staging.StagingCommitter
>
>  file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/hadoop-aws-3.2.1.jar
>   class: org.apache.hadoop.fs.s3a.commit.magic.MagicS3GuardCommitter
>
>  file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/hadoop-aws-3.2.1.jar
>   class: org.apache.hadoop.fs.s3a.Invoker
>
>  file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/hadoop-aws-3.2.1.jar
>   class: org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
>
>  file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/hadoop-aws-3.2.1.jar
>
>  then some classes which aren't in 3.2 and which I didn't expect to
> see.
>
>   class: org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens
>  Not found on classpath:
> org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens
>   class: com.amazonaws.services.s3.model.SelectObjectContentRequest
>
>  
> file:/Users/stevel/hadoop-3.2.1/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.375.jar
>   class: org.apache.hadoop.fs.s3a.select.SelectInputStream
>  Not found on classpath:
> org.apache.hadoop.fs.s3a.select.SelectInputStream
>   class: org.apache.hadoop.fs.s3a.impl.RenameOperation
>  Not found on classpath:
> org.apache.hadoop.fs.s3a.impl.RenameOperation
>
>
>
> + the command then executed basic list/read/write operations; all good.
>
>
>
> Validating abfs connector
> =
>
> * set -gx HADOOP_OPTIONAL_TOOLS hadoop-azure
>
>
>
>
> Diagnostics for filesystem abfs://contai...@someone.dfs.core.windows.net/
>
> 
>
> Azure Abfs connector
> ASF Filesystem Connector to Microsoft Azure ABFS Storage
> https://hadoop.apache.org/docs/current/hadoop-azure/index.html
>
> Hadoop information
> ==
>
>   Hadoop 3.2.1
>   Compiled by rohithsharmaks on 2019-09-10T15:56Z
>   Compiled with protoc 2.5.0
>   From source with checksum 776eaf9eee9c0ffc370bcbc1888737
>
> Environment Variables
> =
>

[jira] [Created] (HADOOP-16585) [Tool:NNloadGeneratorMR] Multiple threads are using same id for creating file LoadGenerator#write

2019-09-18 Thread Ranith Sardar (Jira)
Ranith Sardar created HADOOP-16585:
--

 Summary:  [Tool:NNloadGeneratorMR] Multiple threads are using same 
id for creating file LoadGenerator#write 
 Key: HADOOP-16585
 URL: https://issues.apache.org/jira/browse/HADOOP-16585
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ranith Sardar
Assignee: Ranith Sardar









Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-18 Thread Ayush Saxena
Thanks Rohith for driving the release.

+1 (non-binding)

-Built from source on Ubuntu-18.04
-Successful Native build.
-Verified basic HDFS Commands.
-Verified basic Erasure Coding Commands.
-Verified basic RBF commands.
-Browsed HDFS UI.

Thanks

-Ayush

On Wed, 18 Sep 2019 at 15:41, Weiwei Yang  wrote:

> +1 (binding)
>
> Downloaded tarball, setup a pseudo cluster manually
> Verified basic HDFS operations, copy/view files
> Verified basic YARN operations, run sample DS jobs
> Verified basic YARN restful APIs, e.g. cluster/nodes info etc
> Set and verified YARN node-attributes, including CLI
>
> Thanks
> Weiwei
> On Sep 18, 2019, 11:41 AM +0800, zhankun tang ,
> wrote:
> > +1 (non-binding).
> > Installed and verified it by running several Spark and DS jobs.
> >
> > BR,
> > Zhankun
> >
> > On Wed, 18 Sep 2019 at 08:05, Naganarasimha Garla <
> > naganarasimha...@apache.org> wrote:
> >
> > > Verified the source and the binary tar and the sha512 checksums
> > > Installed and verified the basic hadoop operations (ran a few MR tasks)
> > >
> > > +1.
> > >
> > > Thanks,
> > > + Naga
> > >
> > > On Wed, Sep 18, 2019 at 1:32 AM Anil Sadineni 
> > > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > On Tue, Sep 17, 2019 at 9:55 AM Santosh Marella 
> > > wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > On Wed, Sep 11, 2019 at 12:26 AM Rohith Sharma K S <
> > > > > rohithsharm...@apache.org> wrote:
> > > > >
> > > > > > Hi folks,
> > > > > >
> > > > > > I have put together a release candidate (RC0) for Apache Hadoop
> > > 3.2.1.
> > > > > >
> > > > > > The RC is available at:
> > > > > > http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/
> > > > > >
> > > > > > The RC tag in git is release-3.2.1-RC0:
> > > > > > https://github.com/apache/hadoop/tree/release-3.2.1-RC0
> > > > > >
> > > > > >
> > > > > > The maven artifacts are staged at
> > > > > >
> > > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1226/
> > > > > >
> > > > > > You can find my public key at:
> > > > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > > > >
> > > > > > This vote will run for 7 days (5 weekdays), ending on 18th Sept at
> > > > > > 11:59 pm PST.
> > > > > >
> > > > > > I have done testing with a pseudo cluster and distributed shell job.
> > > > > > My +1 to start.
> > > > > >
> > > > > > Thanks & Regards
> > > > > > Rohith Sharma K S
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Thanks & Regards,
> > > > Anil Sadineni
> > > > Solutions Architect, Optlin Inc
> > > > Ph: 571-438-1974 | www.optlin.com
> > > >
> > >
>


Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-18 Thread Weiwei Yang
+1 (binding)

Downloaded tarball, setup a pseudo cluster manually
Verified basic HDFS operations, copy/view files
Verified basic YARN operations, run sample DS jobs
Verified basic YARN restful APIs, e.g. cluster/nodes info etc
Set and verified YARN node-attributes, including CLI
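
Checks of this kind can be reproduced roughly as follows (a sketch, assuming a
default pseudo cluster on localhost; the distributed shell jar path and
version are assumptions):

  # run a sample distributed shell (DS) job:
  bin/yarn jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
    org.apache.hadoop.yarn.applications.distributedshell.Client \
    -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.1.jar \
    -shell_command "date" -num_containers 2

  # basic RM REST checks (cluster/nodes info):
  curl http://localhost:8088/ws/v1/cluster/info
  curl http://localhost:8088/ws/v1/cluster/nodes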

Thanks
Weiwei
On Sep 18, 2019, 11:41 AM +0800, zhankun tang wrote:
> +1 (non-binding).
> Installed and verified it by running several Spark and DS jobs.
>
> BR,
> Zhankun
>
> On Wed, 18 Sep 2019 at 08:05, Naganarasimha Garla <
> naganarasimha...@apache.org> wrote:
>
> > Verified the source and the binary tar and the sha512 checksums
> > Installed and verified the basic hadoop operations (ran a few MR tasks)
> >
> > +1.
> >
> > Thanks,
> > + Naga
> >
> > On Wed, Sep 18, 2019 at 1:32 AM Anil Sadineni 
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > On Tue, Sep 17, 2019 at 9:55 AM Santosh Marella 
> > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > On Wed, Sep 11, 2019 at 12:26 AM Rohith Sharma K S <
> > > > rohithsharm...@apache.org> wrote:
> > > >
> > > > > Hi folks,
> > > > >
> > > > > I have put together a release candidate (RC0) for Apache Hadoop
> > 3.2.1.
> > > > >
> > > > > The RC is available at:
> > > > > http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/
> > > > >
> > > > > The RC tag in git is release-3.2.1-RC0:
> > > > > https://github.com/apache/hadoop/tree/release-3.2.1-RC0
> > > > >
> > > > >
> > > > > The maven artifacts are staged at
> > > > >
> > > https://repository.apache.org/content/repositories/orgapachehadoop-1226/
> > > > >
> > > > > You can find my public key at:
> > > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > > >
> > > > > This vote will run for 7 days (5 weekdays), ending on 18th Sept at
> > > > > 11:59 pm PST.
> > > > >
> > > > > I have done testing with a pseudo cluster and distributed shell job.
> > > > > My +1 to start.
> > > > >
> > > > > Thanks & Regards
> > > > > Rohith Sharma K S
> > > > >
> > > >
> > >
> > >
> > > --
> > > Thanks & Regards,
> > > Anil Sadineni
> > > Solutions Architect, Optlin Inc
> > > Ph: 571-438-1974 | www.optlin.com
> > >
> >


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-18 Thread Steve Loughran
one thing to consider here as you are giving up your ability to make
changes in hadoop-* modules, including hadoop-common, and their
dependencies, in sync with your own code. That goes for filesystem contract
tests.

are you happy with that?

On Tue, Sep 17, 2019 at 10:48 AM Elek, Marton  wrote:

>
>
> TLDR; I propose to move Ozone related code out from Hadoop trunk and
> store it in a separated *Hadoop* git repository apache/hadoop-ozone.git
>
>
>
>
> When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
> be part of the source tree but with separated release cadence, mainly
> because it had the hadoop-trunk/SNAPSHOT as compile time dependency.
>
> During the last Ozone releases this dependency is removed to provide
> more stable releases. Instead of using the latest trunk/SNAPSHOT build
> from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
>
> As we have no more strict dependency between Hadoop trunk SNAPSHOT and
> Ozone trunk I propose to separate the two code base from each other with
> creating a new Hadoop git repository (apache/hadoop-ozone.git):
>
> With moving Ozone to a separated git repository:
>
>   * It would be easier to contribute and understand the build (as of now
> we always need `-f pom.ozone.xml` as a Maven parameter)
>   * It would be possible to adjust build process without breaking
> Hadoop/Ozone builds.
>   * It would be possible to use different Readme/.asf.yaml/github
> template for the Hadoop Ozone and core Hadoop. (For example the current
> github template [2] has a link to the contribution guideline [3]. Ozone
> has an extended version [4] from this guideline with additional
> information.)
>   * Testing would be more safe as it won't be possible to change core
> Hadoop and Hadoop Ozone in the same patch.
>   * It would be easier to cut branches for Hadoop releases (based on the
> original consensus, Ozone should be removed from all the release
> branches after creating release branches from trunk)
>
>
> What do you think?
>
> Thanks,
> Marton
>
> [1]:
>
> https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> [2]:
>
> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> [4]:
>
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
>
>
>


Re: [VOTE] Release Hadoop-3.1.3-RC0

2019-09-18 Thread zhankun tang
Hi Masatake,

My bad, I read it wrong. Yeah, you're right. HDFS-14759 can be
backported to both branch-3.2 and branch-3.1.

Let's see if any other blocker/critical issues come up; otherwise, I
personally would prefer not to run another RC1 and vote for this minor change.

BR,
Zhankun

On Wed, 18 Sep 2019 at 15:02, Masatake Iwasaki 
wrote:

> Hi Zhankun,
>
>  > Can you please help to provide a branch-3.1 patch for HDFS-14759? Or we can
>  > move it to the next release of branch-3.1 since the noisy info is not a
>  > blocker issue to me. Does that make sense?
>
> I tried cherry-picking HDFS-14759 on my side.
> Since there was no conflict, I pushed the branch-3.1 to apache repo.
> Could you pull and check it?
>
> Masatake
>
> On 9/18/19 15:49, Zhankun Tang wrote:
> > Hi Masatake,
> >
> > Thanks for helping to verify!
> > I checked that branch-3.2 has the HDFS-14759 committed already.
> > Release-3.2.1-RC0 should have no such issue.
> >
> > For branch-3.1, cherry-picking the same commit has conflicts. I'm confirming
> > if we can fix it or there's a feasible plan to backport the whole
> > HADOOP-15226 (which targets branch-3.2) to branch-3.1.
> >
> > Can you please help to provide a branch-3.1 patch for HDFS-14759? Or we
> can
> > move it to the next release of branch-3.1 since the noisy info is not a
> > blocker issue to me. Does that make sense?
> >
> >
> > BR,
> > Zhankun
> >
> > On Wed, 18 Sep 2019 at 14:23, Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp>
> > wrote:
> >
> >> Thanks for putting this up, Zhankun Tang.
> >>
> >> While I was testing the RC0 with the CLI,
> >> a noisy INFO message was emitted on every data transfer operation::
> >>
> >> 2019-09-17 16:00:42,942 INFO sasl.SaslDataTransferClient: SASL
> >> encryption trust check: localHostTrusted = false, remoteHostTrusted =
> false
> >>
> >> The issue was fixed by HDFS-14759.
> >> I think the fix should be backported if we cut RC1.
> >>
> >> Since the fix version of HDFS-14759 is 3.3.0, RC0 of 3.2.1 could have the
> >> same issue.
> >>
> >> Regards,
> >> Masatake Iwasaki
> >>
> >> On 9/12/19 17:04, Zhankun Tang wrote:
> >>> Hi folks,
> >>>
> >>> Thanks to everyone's help on this release. Special thanks to Rohith,
> >>> Wei-Chiu, Akira, Sunil, Wangda!
> >>>
> >>> I have created a release candidate (RC0) for Apache Hadoop 3.1.3.
> >>>
> >>> The RC release artifacts are available at:
> >>> http://home.apache.org/~ztang/hadoop-3.1.3-RC0/
> >>>
> >>> The maven artifacts are staged at:
> >>>
> https://repository.apache.org/content/repositories/orgapachehadoop-1228/
> >>>
> >>> The RC tag in git is here:
> >>> https://github.com/apache/hadoop/tree/release-3.1.3-RC0
> >>>
> >>> And my public key is at:
> >>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >>>
> >>> *This vote will run for 7 days, ending on Sept.19th at 11:59 pm PST.*
> >>>
> >>> For the testing, I have run several Spark and distributed shell jobs in
> >> my
> >>> pseudo cluster.
> >>>
> >>> My +1 (non-binding) to start.
> >>>
> >>> BR,
> >>> Zhankun
> >>>
> >>> On Wed, 4 Sep 2019 at 15:56, zhankun tang 
> wrote:
> >>>
>  Hi all,
> 
>  Thanks for everyone helping in resolving all the blockers targeting
> >> Hadoop
>  3.1.3[1]. We've cleaned all the blockers and moved out non-blockers
> >> issues
>  to 3.1.4.
> 
>  I'll cut the branch today and call a release vote soon. Thanks!
> 
> 
>  [1]. https://s.apache.org/5hj5i
> 
>  BR,
>  Zhankun
> 
> 
>  On Wed, 21 Aug 2019 at 12:38, Zhankun Tang  wrote:
> 
> > Hi folks,
> >
> > We have Apache Hadoop 3.1.2 released on Feb 2019.
> >
> > It's been more than 6 months and there are
> >
> > 246 fixes[1], 2 blocker and 4 critical issues [2]
> >
> > (As Wei-Chiu Chuang mentioned, HDFS-13596 will be another blocker)
> >
> >
> > I propose my plan to do a maintenance release of 3.1.3 in the next
> few
> > (one or two) weeks.
> >
> > Hadoop 3.1.3 release plan:
> >
> > Code Freezing Date: *25th August 2019 PDT*
> >
> > Release Date: *31st August 2019 PDT*
> >
> >
> > Please feel free to share your insights on this. Thanks!
> >
> >
> > [1] https://s.apache.org/zw8l5
> >
> > [2] https://s.apache.org/fjol5
> >
> >
> > BR,
> >
> > Zhankun
> >
> >>
>
>


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-18 Thread Sandeep Nemuri
+1 (non-binding)



On Wed, Sep 18, 2019 at 5:33 AM Weiwei Yang  wrote:

> +1 (binding)
>
> Thanks
> Weiwei
>
> On Wed, Sep 18, 2019 at 6:35 AM Wangda Tan  wrote:
>
> > +1 (binding).
> >
> > From my experience with the Submarine project, I think moving to a
> > separate repo helps.
> >
> > - Wangda
> >
> > On Tue, Sep 17, 2019 at 11:41 AM Subru Krishnan 
> wrote:
> >
> > > +1 (binding).
> > >
> > > IIUC, there will not be an Ozone module in trunk anymore, which was my
> > > only concern from the original discussion thread. IMHO, this should be
> > > the default approach for new modules.
> > >
> > > On Tue, Sep 17, 2019 at 9:58 AM Salvatore LaMendola (BLOOMBERG/ 731
> LEX)
> > <
> > > slamendo...@bloomberg.net> wrote:
> > >
> > > > +1
> > > >
> > > > From: e...@apache.org At: 09/17/19 05:48:32 To:
> > > hdfs-...@hadoop.apache.org,
> > > > mapreduce-...@hadoop.apache.org,  common-dev@hadoop.apache.org,
> > > > yarn-...@hadoop.apache.org
> > > > Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk
> > > > source tree
> > > >
> > > >
> > > > TLDR; I propose to move Ozone related code out from Hadoop trunk and
> > > > store it in a separated *Hadoop* git repository
> apache/hadoop-ozone.git
> > > >
> > > >
> > > > When Ozone was adopted as a new Hadoop subproject it was proposed[1]
> to
> > > > be part of the source tree but with separated release cadence, mainly
> > > > because it had the hadoop-trunk/SNAPSHOT as compile time dependency.
> > > >
> > > > During the last Ozone releases this dependency is removed to provide
> > > > more stable releases. Instead of using the latest trunk/SNAPSHOT
> build
> > > > from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
> > > >
> > > > As we have no more strict dependency between Hadoop trunk SNAPSHOT
> and
> > > > Ozone trunk I propose to separate the two code base from each other
> > with
> > > > creating a new Hadoop git repository (apache/hadoop-ozone.git):
> > > >
> > > > With moving Ozone to a separated git repository:
> > > >
> > > >   * It would be easier to contribute and understand the build (as of
> > now
> > > > we always need `-f pom.ozone.xml` as a Maven parameter)
> > > >   * It would be possible to adjust build process without breaking
> > > > Hadoop/Ozone builds.
> > > >   * It would be possible to use different Readme/.asf.yaml/github
> > > > template for the Hadoop Ozone and core Hadoop. (For example the
> current
> > > > github template [2] has a link to the contribution guideline [3].
> Ozone
> > > > has an extended version [4] from this guideline with additional
> > > > information.)
> > > >   * Testing would be more safe as it won't be possible to change core
> > > > Hadoop and Hadoop Ozone in the same patch.
> > > >   * It would be easier to cut branches for Hadoop releases (based on
> > the
> > > > original consensus, Ozone should be removed from all the release
> > > > branches after creating release branches from trunk)
> > > >
> > > >
> > > > What do you think?
> > > >
> > > > Thanks,
> > > > Marton
> > > >
> > > > [1]:
> > > >
> > > >
> > >
> >
> > > > https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> > > > [2]:
> > > >
> > > >
> > >
> >
> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> > > > [3]:
> > > https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> > > > [4]:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
> > > >
> > > >
> > > >
> > > >
> > >
> >
>


-- 
*  Regards*
*  Sandeep Nemuri*


Re: [VOTE] Release Hadoop-3.1.3-RC0

2019-09-18 Thread Masatake Iwasaki

Hi Zhankun,

> Can you please help to provide a branch-3.1 patch for HDFS-14759? Or we can
> move it to the next release of branch-3.1 since the noisy info is not a
> blocker issue to me. Does that make sense?

I tried cherry-picking HDFS-14759 on my side.
Since there was no conflict, I pushed the branch-3.1 to apache repo.
Could you pull and check it?
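
As a sketch, the backport flow described above (the remote name and the log
lookup are illustrative):

  git fetch apache
  git checkout branch-3.1
  git log --oneline --grep='HDFS-14759' apache/trunk   # find the fix commit
  git cherry-pick <commit-hash>                        # applied with no conflict
  git push apache branch-3.1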

Masatake

On 9/18/19 15:49, Zhankun Tang wrote:

Hi Masatake,

Thanks for helping to verify!
I checked that branch-3.2 has the HDFS-14759 committed already.
Release-3.2.1-RC0 should have no such issue.

For branch-3.1, cherry-picking the same commit has conflicts. I'm confirming
if we can fix it or there's a feasible plan to backport the whole
HADOOP-15226 (which targets branch-3.2) to branch-3.1.

Can you please help to provide a branch-3.1 patch for HDFS-14759? Or we can
move it to the next release of branch-3.1 since the noisy info is not a
blocker issue to me. Does that make sense?


BR,
Zhankun

On Wed, 18 Sep 2019 at 14:23, Masatake Iwasaki 
wrote:


Thanks for putting this up, Zhankun Tang.

While I was testing the RC0 with the CLI,
a noisy INFO message was emitted on every data transfer operation::

2019-09-17 16:00:42,942 INFO sasl.SaslDataTransferClient: SASL
encryption trust check: localHostTrusted = false, remoteHostTrusted = false

The issue was fixed by HDFS-14759.
I think the fix should be backported if we cut RC1.

Since the fix version of HDFS-14759 is 3.3.0, RC0 of 3.2.1 could have the
same issue.

Regards,
Masatake Iwasaki

On 9/12/19 17:04, Zhankun Tang wrote:

Hi folks,

Thanks to everyone's help on this release. Special thanks to Rohith,
Wei-Chiu, Akira, Sunil, Wangda!

I have created a release candidate (RC0) for Apache Hadoop 3.1.3.

The RC release artifacts are available at:
http://home.apache.org/~ztang/hadoop-3.1.3-RC0/

The maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1228/

The RC tag in git is here:
https://github.com/apache/hadoop/tree/release-3.1.3-RC0

And my public key is at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

*This vote will run for 7 days, ending on Sept.19th at 11:59 pm PST.*

For the testing, I have run several Spark and distributed shell jobs in

my

pseudo cluster.

My +1 (non-binding) to start.

BR,
Zhankun

On Wed, 4 Sep 2019 at 15:56, zhankun tang  wrote:


Hi all,

Thanks for everyone helping in resolving all the blockers targeting

Hadoop

3.1.3[1]. We've cleaned all the blockers and moved out non-blockers

issues

to 3.1.4.

I'll cut the branch today and call a release vote soon. Thanks!


[1]. https://s.apache.org/5hj5i

BR,
Zhankun


On Wed, 21 Aug 2019 at 12:38, Zhankun Tang  wrote:


Hi folks,

We have Apache Hadoop 3.1.2 released on Feb 2019.

It's been more than 6 months and there are

246 fixes[1], 2 blocker and 4 critical issues [2]

(As Wei-Chiu Chuang mentioned, HDFS-13596 will be another blocker)


I propose my plan to do a maintenance release of 3.1.3 in the next few
(one or two) weeks.

Hadoop 3.1.3 release plan:

Code Freezing Date: *25th August 2019 PDT*

Release Date: *31st August 2019 PDT*


Please feel free to share your insights on this. Thanks!


[1] https://s.apache.org/zw8l5

[2] https://s.apache.org/fjol5


BR,

Zhankun









Re: [VOTE] Release Hadoop-3.1.3-RC0

2019-09-18 Thread Zhankun Tang
Hi Masatake,

Thanks for helping to verify!
I checked that branch-3.2 has the HDFS-14759 committed already.
Release-3.2.1-RC0 should have no such issue.

For branch-3.1, cherry-picking the same commit has conflicts. I'm confirming
if we can fix it or there's a feasible plan to backport the whole
HADOOP-15226 (which targets branch-3.2) to branch-3.1.

Can you please help to provide a branch-3.1 patch for HDFS-14759? Or we can
move it to the next release of branch-3.1 since the noisy info is not a
blocker issue to me. Does that make sense?


BR,
Zhankun

On Wed, 18 Sep 2019 at 14:23, Masatake Iwasaki 
wrote:

> Thanks for putting this up, Zhankun Tang.
>
> While I was testing the RC0 with the CLI,
> a noisy INFO message was emitted on every data transfer operation::
>
>2019-09-17 16:00:42,942 INFO sasl.SaslDataTransferClient: SASL
> encryption trust check: localHostTrusted = false, remoteHostTrusted = false
>
> The issue was fixed by HDFS-14759.
> I think the fix should be backported if we cut RC1.
>
> Since the fix version of HDFS-14759 is 3.3.0, RC0 of 3.2.1 could have the
> same issue.
>
> Regards,
> Masatake Iwasaki
>
> On 9/12/19 17:04, Zhankun Tang wrote:
> > Hi folks,
> >
> > Thanks to everyone's help on this release. Special thanks to Rohith,
> > Wei-Chiu, Akira, Sunil, Wangda!
> >
> > I have created a release candidate (RC0) for Apache Hadoop 3.1.3.
> >
> > The RC release artifacts are available at:
> > http://home.apache.org/~ztang/hadoop-3.1.3-RC0/
> >
> > The maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1228/
> >
> > The RC tag in git is here:
> > https://github.com/apache/hadoop/tree/release-3.1.3-RC0
> >
> > And my public key is at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > *This vote will run for 7 days, ending on Sept.19th at 11:59 pm PST.*
> >
> > For the testing, I have run several Spark and distributed shell jobs in
> my
> > pseudo cluster.
> >
> > My +1 (non-binding) to start.
> >
> > BR,
> > Zhankun
> >
> > On Wed, 4 Sep 2019 at 15:56, zhankun tang  wrote:
> >
> >> Hi all,
> >>
> >> Thanks for everyone helping in resolving all the blockers targeting
> Hadoop
> >> 3.1.3[1]. We've cleaned all the blockers and moved out non-blockers
> issues
> >> to 3.1.4.
> >>
> >> I'll cut the branch today and call a release vote soon. Thanks!
> >>
> >>
> >> [1]. https://s.apache.org/5hj5i
> >>
> >> BR,
> >> Zhankun
> >>
> >>
> >> On Wed, 21 Aug 2019 at 12:38, Zhankun Tang  wrote:
> >>
> >>> Hi folks,
> >>>
> >>> We have Apache Hadoop 3.1.2 released on Feb 2019.
> >>>
> >>> It's been more than 6 months and there are
> >>>
> >>> 246 fixes[1], 2 blocker and 4 critical issues [2]
> >>>
> >>> (As Wei-Chiu Chuang mentioned, HDFS-13596 will be another blocker)
> >>>
> >>>
> >>> I propose my plan to do a maintenance release of 3.1.3 in the next few
> >>> (one or two) weeks.
> >>>
> >>> Hadoop 3.1.3 release plan:
> >>>
> >>> Code Freezing Date: *25th August 2019 PDT*
> >>>
> >>> Release Date: *31st August 2019 PDT*
> >>>
> >>>
> >>> Please feel free to share your insights on this. Thanks!
> >>>
> >>>
> >>> [1] https://s.apache.org/zw8l5
> >>>
> >>> [2] https://s.apache.org/fjol5
> >>>
> >>>
> >>> BR,
> >>>
> >>> Zhankun
> >>>
>
>


Re: [VOTE] Release Hadoop-3.1.3-RC0

2019-09-18 Thread Masatake Iwasaki

Thanks for putting this up, Zhankun Tang.

While I was testing the RC0 with the CLI,
a noisy INFO message was emitted on every data transfer operation::

  2019-09-17 16:00:42,942 INFO sasl.SaslDataTransferClient: SASL 
encryption trust check: localHostTrusted = false, remoteHostTrusted = false


The issue was fixed by HDFS-14759.
I think the fix should be backported if we cut RC1.

Since the fix version of HDFS-14759 is 3.3.0, RC0 of 3.2.1 could have the
same issue.


Regards,
Masatake Iwasaki

On 9/12/19 17:04, Zhankun Tang wrote:

Hi folks,

Thanks to everyone's help on this release. Special thanks to Rohith,
Wei-Chiu, Akira, Sunil, Wangda!

I have created a release candidate (RC0) for Apache Hadoop 3.1.3.

The RC release artifacts are available at:
http://home.apache.org/~ztang/hadoop-3.1.3-RC0/

The maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1228/

The RC tag in git is here:
https://github.com/apache/hadoop/tree/release-3.1.3-RC0

And my public key is at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

*This vote will run for 7 days, ending on Sept.19th at 11:59 pm PST.*

For the testing, I have run several Spark and distributed shell jobs in my
pseudo cluster.

My +1 (non-binding) to start.

BR,
Zhankun

On Wed, 4 Sep 2019 at 15:56, zhankun tang  wrote:


Hi all,

Thanks for everyone helping in resolving all the blockers targeting Hadoop
3.1.3[1]. We've cleaned all the blockers and moved out non-blockers issues
to 3.1.4.

I'll cut the branch today and call a release vote soon. Thanks!


[1]. https://s.apache.org/5hj5i

BR,
Zhankun


On Wed, 21 Aug 2019 at 12:38, Zhankun Tang  wrote:


Hi folks,

We have Apache Hadoop 3.1.2 released on Feb 2019.

It's been more than 6 months and there are

246 fixes[1], 2 blocker and 4 critical issues [2]

(As Wei-Chiu Chuang mentioned, HDFS-13596 will be another blocker)


I propose my plan to do a maintenance release of 3.1.3 in the next few
(one or two) weeks.

Hadoop 3.1.3 release plan:

Code Freezing Date: *25th August 2019 PDT*

Release Date: *31st August 2019 PDT*


Please feel free to share your insights on this. Thanks!


[1] https://s.apache.org/zw8l5

[2] https://s.apache.org/fjol5


BR,

Zhankun



