Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread Xiaoqiao He
Thanks Brahma Reddy Battula for your great work here.
Stephen fixed a lease leak in the NameNode, and the fix is ready now:
https://issues.apache.org/jira/browse/HDFS-14498.
I think this affects 3.3.0-RC0. Would you check this?
Sorry for reporting it so late.

On Mon, Jul 13, 2020 at 2:53 PM Takanobu Asanuma 
wrote:

> +1(non-binding)
>- verified checksums
>- succeeded in building the package with OpenJDK 8
>- started HDFS cluster with kerberos with OpenJDK 11
>- verified Web UIs (NN, DN, Router)
>- Ran some operations of Router-based Federation with Security
>- Ran some operations of Erasure Coding
>
> Thanks,
> Takanobu
>
> 
> From: Bilwa S T 
> Sent: Monday, July 13, 2020 2:23
> To: Surendra Singh Lilhore; hemanth boyina
> Cc: Brahma Reddy Battula; mapreduce-dev; Hdfs-dev; Hadoop Common; yarn-dev
> Subject: RE: [VOTE] Release Apache Hadoop 3.3.0 - RC0
>
> +1(non-binding)
>
> 1. Deployed 3 node cluster
> 2. Browsed through Web UI (RM, NM)
> 3. Executed Jobs (pi, wordcount, TeraGen, TeraSort)
> 4. Verified basic yarn commands
>
> Thanks,
> Bilwa
>
> -Original Message-
> From: Surendra Singh Lilhore [mailto:surendralilh...@gmail.com]
> Sent: 12 July 2020 18:32
> To: hemanth boyina 
> Cc: Iñigo Goiri ; Vinayakumar B <
> vinayakum...@apache.org>; Brahma Reddy Battula ;
> mapreduce-dev ; Hdfs-dev <
> hdfs-dev@hadoop.apache.org>; Hadoop Common ;
> yarn-dev 
> Subject: Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0
>
> +1(binding)
>
> Deployed HDFS and Yarn Cluster
> > Verified basic shell commands
> > Ran some jobs
> > Verified UI
>
> -Surendra
>
> On Sat, Jul 11, 2020 at 9:41 PM hemanth boyina  >
> wrote:
>
> > +1(non-binding)
> > Deployed Cluster with Namenodes and Router
> > *) Verified shell commands
> > *) Executed various jobs
> > *) Browsed UI's
> >
> >
> > Thanks,
> > HemanthBoyina
> >
> >
> > On Sat, 11 Jul 2020, 00:05 Iñigo Goiri,  wrote:
> >
> > > +1 (Binding)
> > >
> > > Deployed a cluster on Azure VMs with:
> > > * 3 VMs with HDFS Namenodes and Routers
> > > * 2 VMs with YARN Resource Managers
> > > * 5 VMs with HDFS Datanodes and Node Managers
> > >
> > > Tests:
> > > * Executed Teragen+Terasort+Teravalidate.
> > > * Executed wordcount.
> > > * Browsed through the Web UI.
> > >
> > >
> > >
> > > On Fri, Jul 10, 2020 at 1:06 AM Vinayakumar B
> > > 
> > > wrote:
> > >
> > > > +1 (Binding)
> > > >
> > > > -Verified all checksums and Signatures.
> > > > -Verified site, Release notes and Change logs
> > > >   + Maybe the changelog and release notes could be grouped by
> > > > project at the second level for a better look (this needs support
> > > > from Yetus)
> > > > -Tested in x86 local 3-node docker cluster.
> > > >   + Built from source with OpenJDK 8 and Ubuntu 18.04
> > > >   + Deployed 3 node docker cluster
> > > >   + Ran various Jobs (wordcount, Terasort, Pi, etc)
> > > >
> > > > No Issues reported.
> > > >
> > > > -Vinay
> > > >
> > > > On Fri, Jul 10, 2020 at 1:19 PM Sheng Liu 
> > > wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > - checked out the "3.3.0-aarch64-RC0" binary packages
> > > > >
> > > > > - started a cluster with 3 VM nodes running Ubuntu 18.04
> > > > > ARM/aarch64, openjdk-11-jdk
> > > > >
> > > > > - checked some web UIs (NN, DN, RM, NM)
> > > > >
> > > > > - Executed a wordcount, TeraGen, TeraSort and TeraValidate
> > > > >
> > > > > - Executed a TestDFSIO job
> > > > >
> > > > > - Executed a Pi job
> > > > >
> > > > > BR,
> > > > > Liusheng
> > > > >
> > > > > Zhenyu Zheng  wrote on Fri, Jul 10, 2020 at 3:45 PM:
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > - Verified all hashes and checksums
> > > > > > - Tested on ARM platform for the following actions:
> > > > > >   + Built from source on Ubuntu 18.04, OpenJDK 8
> > > > > >   + Deployed a pseudo cluster
> > > > > >   + Ran some example jobs (grep, wordcount, pi)
> > > > > >   + Ran teragen/terasort/teravalidate
> > > > > >   + Ran TestDFSIO job
> > > > > >
> > > > > > BR,
> > > > > >
> > > > > > Zhenyu
> > > > > >
> > > > > > On Fri, Jul 10, 2020 at 2:40 PM Akira Ajisaka
> > > > > >  > >
> > > > > wrote:
> > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > > - Verified checksums and signatures.
> > > > > > > - Built from the source with CentOS 7 and OpenJDK 8.
> > > > > > > - Successfully upgraded HDFS to 3.3.0-RC0 in our development
> > > cluster
> > > > > > (with
> > > > > > > RBF, security, and OpenJDK 11) for end-users. No issues
> reported.
> > > > > > > - The document looks good.
> > > > > > > - Deployed pseudo cluster and ran some MapReduce jobs.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Akira
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Jul 7, 2020 at 7:27 AM Brahma Reddy Battula <
> > > > bra...@apache.org
> > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi folks,
> > > > > > > >
> > > > > > > > This is the first release candidate for the first release
> > > > > > > > of
> > > Apache
> > > >

Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread Brahma Reddy Battula
Hi Xiaoqiao,

Thanks for bringing this to my attention.

This scenario is too rare to occur in practice, and a couple of releases have
gone out since HDFS-4882 (the JIRA that introduced the break).

So, do you think it's fine to include this in the next release? Or, if you
still feel strongly, we can go for another RC...



Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread Masatake Iwasaki

Thanks Brahma for putting this up,

+1 (binding).

* verified the signature and checksum of the source tarball.
* built from the source tarball with the native profile enabled on CentOS 7 and
CentOS 8 with OpenJDK 8.
* built the documentation and skimmed the contents.
* ran example jobs on a pseudo-distributed cluster on CentOS 7 and CentOS 8.
* ran example jobs on a 3-node docker cluster with NN-HA and RM-HA
enabled on CentOS 7.
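The checksum half of that verification can be reproduced with plain JDK code (signature checking still needs gpg and the published KEYS file). This is a minimal sketch, not a replacement for the usual `sha512sum -c` workflow; the class name is mine, not part of the release tooling:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha512Check {
    // Hex-encoded SHA-512 digest, to compare against the published .sha512 file.
    static String sha512(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(data)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Reading the whole tarball at once is fine for a one-off check;
        // stream through a DigestInputStream for multi-gigabyte artifacts.
        byte[] data = Files.readAllBytes(Paths.get(args[0]));
        System.out.println(sha512(data) + "  " + args[0]);
    }
}
```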


Masatake Iwasaki

On 2020/07/07 7:27, Brahma Reddy Battula wrote:

Hi folks,

This is the first release candidate for the first release of Apache
Hadoop 3.3.0
line.

It contains *1644[1]* fixed JIRA issues since 3.2.1, which include a lot of
features and improvements (read the full set of release notes).

Below feature additions are the highlights of the release.

- ARM Support
- Enhancements and new features on S3A, S3Guard, and ABFS
- Java 11 Runtime support and TLS 1.3.
- Support Tencent Cloud COS File System.
- Added security to HDFS Router.
- Support non-volatile storage class memory (SCM) in HDFS cache directives
- Support Interactive Docker Shell for running Containers.
- Scheduling of opportunistic containers
- A pluggable device plugin framework to ease vendor plugin development

*The RC0 artifacts are at*: http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/

*This is the first release to include an ARM binary; please give it a try.*
*RC tag is *release-3.3.0-RC0.


*The maven artifacts are hosted here:*
https://repository.apache.org/content/repositories/orgapachehadoop-1271/

*My public key is available here:*
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM IST.


I have done some testing with my pseudo cluster. My +1 to start.



Regards,
Brahma Reddy Battula


1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.3.0) AND
fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER BY
fixVersion ASC



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[VOTE] Release Apache Hadoop 3.1.4 (RC3)

2020-07-13 Thread Gabor Bota
Hi folks,

I have put together a release candidate (RC3) for Hadoop 3.1.4.

*
In addition to the previous RCs, this RC includes:
* fix of YARN-10347. Fix double locking in
CapacityScheduler#reinitialize in branch-3.1
(https://issues.apache.org/jira/browse/YARN-10347)
* the revert of HDFS-14941, as it caused
HDFS-15421. IBR leak causes standby NN to be stuck in safe mode.
(https://issues.apache.org/jira/browse/HDFS-15421)
* HDFS-15323, as requested.
(https://issues.apache.org/jira/browse/HDFS-15323)
*

The RC is available at: http://people.apache.org/~gabota/hadoop-3.1.4-RC3/
The RC tag in git is here:
https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC3
The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1274/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C

Please try the release and vote. The vote will run for 7 weekdays,
until July 22, 2020, 23:00 CET.


Thanks,
Gabor

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread Xiaoqiao He
Thanks Brahma, it makes sense for me to include that patch in the next
release.

+1 (non-binding).

   - Successfully built from source with native on CentOS6 with jdk1.8.112.
   - Setup a pseudo-distributed cluster with HDFS RBF arch and enable
   security using Kerberos on CentOS6.
   - Ran simple HDFS shell commands through the
   Router (mkdir/chown/chmod/rename/put/get/ls/cat/stat/rmr).
   - Executed WordCount, Pi, Grep jobs.
   - Browsed the master WebUI (JN, NN, Router, RM).

Thanks,
He Xiaoqiao



[jira] [Resolved] (HDFS-13934) Multipart uploaders to be created through API call to FileSystem/FileContext, not service loader

2020-07-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-13934.
---
Fix Version/s: 3.3.1
   Resolution: Fixed

merged to trunk & 3.3. No plans to backport further

> Multipart uploaders to be created through API call to FileSystem/FileContext, 
> not service loader
> 
>
> Key: HDFS-13934
> URL: https://issues.apache.org/jira/browse/HDFS-13934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, fs/s3, hdfs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.1
>
>
> The multipart uploaders are created via service loaders. This is troublesome:
> # HADOOP-12636, HADOOP-13323, and HADOOP-13625 highlight how the load process 
> forces the transient loading of dependencies. If a dependent class cannot be 
> loaded (e.g. aws-sdk is not on the classpath), that service won't load. 
> Without error handling around the load process, this stops any uploader from 
> loading. Even with that error handling, the performance hit of that load, 
> especially with reshaded dependencies, hurts performance (HADOOP-13138).
> # It makes wrapping the load with any filter impossible, and stops transitive 
> binding through ViewFS, mocking, etc.
> # It complicates security in a kerberized world. If you have an FS instance 
> for user A, then you should be able to create an MPU instance with that user's 
> permissions. Currently, if a service were to try to create one, you'd be 
> looking at doAs() games around the service loading, and a more complex bind 
> process.
> Proposed:
> # Remove the service loader mechanism entirely.
> # Add a createMultipartUploader(path) call to FileSystem and FileContext, which 
> will create one bound to the current FS, with its permissions, DTs, etc.
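The dependency problem described in point 1 can be illustrated with a self-contained sketch. The provider class names below are hypothetical stand-ins, not real Hadoop classes, and the per-name `Class.forName` scan merely mimics the classpath-sensitive loading a service loader performs:

```java
import java.util.ArrayList;
import java.util.List;

public class GuardedServiceLoad {
    // Resolves each declared provider class by name. Without the per-class
    // guard, one unresolvable dependency (e.g. aws-sdk absent from the
    // classpath) would abort loading every uploader, not just the broken one.
    static List<Class<?>> loadDeclared(List<String> providerNames) {
        List<Class<?>> loaded = new ArrayList<>();
        for (String name : providerNames) {
            try {
                loaded.add(Class.forName(name));
            } catch (ClassNotFoundException | NoClassDefFoundError e) {
                // Skip providers with missing dependencies instead of failing the scan.
            }
        }
        return loaded;
    }

    public static void main(String[] args) {
        List<Class<?>> ok = loadDeclared(List.of(
                "java.lang.String",                 // resolvable
                "com.example.MissingS3Uploader"));  // hypothetical, absent
        System.out.println(ok.size()); // prints 1: only the resolvable class loads
    }
}
```

Binding the uploader to an existing FileSystem instance, as the proposal does, sidesteps this scan entirely and inherits that instance's credentials.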



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/746/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 
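Several of the findbugs items above flag the same pattern: iterating keySet() and calling get() per key instead of iterating entrySet(). A minimal illustration of the two forms (this is a generic sketch, not code from the flagged classes):

```java
import java.util.HashMap;
import java.util.Map;

public class MapIterationDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("a", 1);
        counts.put("b", 2);

        // Flagged form: every get() repeats the hash lookup the iterator already did.
        int viaKeySet = 0;
        for (String key : counts.keySet()) {
            viaKeySet += counts.get(key);
        }

        // Preferred form: entrySet() hands back key and value in a single pass.
        int viaEntrySet = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            viaEntrySet += e.getValue();
        }

        System.out.println(viaKeySet + " " + viaEntrySet); // prints "3 3"
    }
}
```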

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.

[jira] [Created] (HDFS-15466) remove src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploader file

2020-07-13 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-15466:
-

 Summary: remove 
src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploader file
 Key: HDFS-15466
 URL: https://issues.apache.org/jira/browse/HDFS-15466
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: fs, fs/s3
Affects Versions: 3.3.1
Reporter: Steve Loughran


Follow-on to HDFS-13934 (and, as usual, only noticed once that was in).

we no longer need the service declarations in
src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploader

There's no harm in having them there (the service loading is no longer used),
but we should still cut them.
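For reference, such a service declaration file is just a classpath resource listing implementation class names, one per line. The implementation name below is illustrative of the 3.3-era S3A uploader, not authoritative:

```
# src/main/resources/META-INF/services/org.apache.hadoop.fs.MultipartUploader
org.apache.hadoop.fs.s3a.S3AMultipartUploader
```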



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread CR Hota
+1 (non-binding)

Verified build (using CentOS and OpenJDK 8) and checksums.
Built a couple of small HDFS clusters and verified several router functions.
Started a small dedicated YARN cluster and executed word count mapreduce.
Skimmed through UI and documentation added around router changes.

All the best with the release.


[jira] [Reopened] (HDFS-14498) LeaseManager can loop forever on the file for which create has failed

2020-07-13 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He reopened HDFS-14498:


> LeaseManager can loop forever on the file for which create has failed 
> --
>
> Key: HDFS-14498
> URL: https://issues.apache.org/jira/browse/HDFS-14498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.9.0
>Reporter: Sergey Shelukhin
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-14498.001.patch, HDFS-14498.002.patch
>
>
> The logs from file creation are long gone due to infinite lease logging; 
> however, it presumably failed... the client that was trying to write this 
> file is definitely long dead.
> The version includes HDFS-4882.
> We get this log pattern repeating infinitely:
> {noformat}
> 2019-05-16 14:00:16,893 INFO 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: [Lease.  Holder: 
> DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1] has expired hard 
> limit
> 2019-05-16 14:00:16,893 INFO 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Recovering [Lease.  
> Holder: DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 1], src=
> 2019-05-16 14:00:16,893 WARN 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.internalReleaseLease: 
> Failed to release lease for file . Committed blocks are waiting to be 
> minimally replicated. Try again later.
> 2019-05-16 14:00:16,893 WARN 
> [org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@b27557f] 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager: Cannot release the path 
>  in the lease [Lease.  Holder: DFSClient_NONMAPREDUCE_-20898906_61, 
> pending creates: 1]. It will be retried.
> org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: DIR* 
> NameSystem.internalReleaseLease: Failed to release lease for file . 
> Committed blocks are waiting to be minimally replicated. Try again later.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.internalReleaseLease(FSNamesystem.java:3357)
>   at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager.checkLeases(LeaseManager.java:573)
>   at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:509)
>   at java.lang.Thread.run(Thread.java:745)
> $  grep -c "Recovering.*DFSClient_NONMAPREDUCE_-20898906_61, pending creates: 
> 1" hdfs_nn*
> hdfs_nn.log:1068035
> hdfs_nn.log.2019-05-16-14:1516179
> hdfs_nn.log.2019-05-16-15:1538350
> {noformat}
> Aside from an actual bug fix, it might make sense to make LeaseManager not 
> log so much, in case there are more bugs like this...
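The closing suggestion above, making LeaseManager log less on repeated failures, could be sketched as a small per-key suppression helper. This is a hypothetical illustration, not the actual Hadoop LeaseManager code; the class and method names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of per-key log suppression (hypothetical helper, not the
 * actual LeaseManager code): log the first occurrence of each repeated
 * message, then only every Nth occurrence after that.
 */
public class SuppressingLogger {
    private final Map<String, Long> counts = new HashMap<>();
    private final long every;

    public SuppressingLogger(long every) {
        this.every = every;
    }

    /** Returns true when the message for this key should actually be logged. */
    public boolean shouldLog(String key) {
        long n = counts.merge(key, 1L, Long::sum);
        return n == 1 || n % every == 0;
    }
}
```

With a throttle like this keyed on the lease holder, the millions of identical "Recovering [Lease...]" lines counted by the grep above would collapse to one line per interval while still recording that recovery is stuck.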



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread Rakesh Radhakrishnan
Thanks Brahma for getting this out!

+1 (binding)

Verified the following and looks fine to me.
 * Built from source with CentOS 7.4 and OpenJDK 1.8.0_232.
 * Deployed 3-node cluster.
 * Verified HDFS web UIs.
 * Tried out a few basic hdfs shell commands.
 * Ran sample Terasort, Wordcount MR jobs.

-Rakesh-

On Tue, Jul 7, 2020 at 3:57 AM Brahma Reddy Battula 
wrote:

> Hi folks,
>
> This is the first release candidate for the first release of Apache
> Hadoop 3.3.0
> line.
>
> It contains *1644[1]* fixed jira issues since 3.2.1, which include a lot of
> features and improvements (read the full set of release notes).
>
> Below feature additions are the highlights of the release.
>
> - ARM Support
> - Enhancements and new features on S3A, S3Guard, ABFS
> - Java 11 Runtime support and TLS 1.3.
> - Support Tencent Cloud COS File System.
> - Added security to HDFS Router.
> - Support for non-volatile storage class memory (SCM) in HDFS cache directives
> - Support Interactive Docker Shell for running Containers.
> - Scheduling of opportunistic containers
> - A pluggable device plugin framework to ease vendor plugin development
>
> *The RC0 artifacts are at*:
> http://home.apache.org/~brahma/Hadoop-3.3.0-RC0/
>
> *This is the first release to include an ARM binary; please check it out.*
> *RC tag is *release-3.3.0-RC0.
>
>
> *The maven artifacts are hosted here:*
> https://repository.apache.org/content/repositories/orgapachehadoop-1271/
>
> *My public key is available here:*
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The vote will run for 5 weekdays, until Tuesday, July 13 at 3:50 AM IST.
>
>
> I have done some testing with my pseudo cluster. My +1 to start.
>
>
>
> Regards,
> Brahma Reddy Battula
>
>
> 1. project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.3.0) AND
> fixVersion not in (3.2.0, 3.2.1, 3.1.3) AND status = Resolved ORDER BY
> fixVersion ASC
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-13 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/202/

[Jul 12, 2020 6:50:04 AM] (noreply) HDFS-15464: ViewFsOverloadScheme should 
work when -fs option pointing to remote cluster without mount links (#2132). 
Contributed by Uma Maheswara Rao G.
[Jul 12, 2020 7:10:12 AM] (noreply) HDFS-15447 RBF: Add top real owners metrics 
for delegation tokens (#2110)




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-yarn-project 
   Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87] 
   Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At 
TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
 At TestTimelineReaderHBaseDown.java:[line 190] 

findbugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 

findbugs :

   module:hadoop-cloud-storage-p

[jira] [Created] (HDFS-15467) ObserverReadProxyProvider should skip logging first failover from each proxy

2020-07-13 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDFS-15467:
-

 Summary: ObserverReadProxyProvider should skip logging first 
failover from each proxy
 Key: HDFS-15467
 URL: https://issues.apache.org/jira/browse/HDFS-15467
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Hanisha Koneru


After HADOOP-17116, {{RetryInvocationHandler}} skips logging the first 
failover INFO message from each proxy. But {{ObserverReadProxyProvider}} uses a 
{{combinedProxy}} object, which combines all proxies into one and assigns 
{{combinedInfo}} as the ProxyInfo.
{noformat}
ObserverReadProxyProvider# Lines 197-207:

for (int i = 0; i < nameNodeProxies.size(); i++) {
  if (i > 0) {
combinedInfo.append(",");
  }
  combinedInfo.append(nameNodeProxies.get(i).proxyInfo);
}
combinedInfo.append(']');
T wrappedProxy = (T) Proxy.newProxyInstance(
ObserverReadInvocationHandler.class.getClassLoader(),
new Class[] {xface}, new ObserverReadInvocationHandler());
combinedProxy = new ProxyInfo<>(wrappedProxy, combinedInfo.toString());{noformat}
{{RetryInvocationHandler}} depends on the {{ProxyInfo}} to differentiate 
between proxies while checking whether a failover from that proxy happened 
before. And since the combined proxy has only one ProxyInfo, HADOOP-17116 
doesn't work on {{ObserverReadProxyProvider}}. It would need to be handled 
separately.
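The per-proxy bookkeeping that HADOOP-17116's behavior implies can be sketched as follows. The class and field names here are hypothetical, not the actual RetryInvocationHandler internals; the point is only the first-failover-per-key mechanics:

```java
import java.util.HashSet;
import java.util.Set;

/**
 * Sketch of per-proxy failover-log suppression (hypothetical names, not the
 * actual RetryInvocationHandler fields): the first failover away from each
 * distinct ProxyInfo is recorded silently; only later failovers from that
 * proxy are logged.
 */
public class FailoverLogFilter {
    private final Set<String> seenProxies = new HashSet<>();

    /** Returns true when the failover INFO message should be logged. */
    public boolean shouldLogFailover(String proxyInfo) {
        // add() returns true on first sight of this proxy: skip that one.
        return !seenProxies.add(proxyInfo);
    }
}
```

Because ObserverReadProxyProvider presents one combined ProxyInfo string for all namenodes, a filter like this sees only a single key and suppresses just one failover in total, which is why the fix doesn't carry over.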






[jira] [Created] (HDFS-15468) Active namenode crashed when no edit recover

2020-07-13 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HDFS-15468:
-

 Summary: Active namenode crashed when no edit recover
 Key: HDFS-15468
 URL: https://issues.apache.org/jira/browse/HDFS-15468
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy


Suppose the namenode is in safe mode and two journal nodes are restarted for 
maintenance activity.
 
In this case, the journal nodes will not finalize the last edit segment, which 
is the in-progress edit segment.
 
This last edit segment is finalized or recovered on an edit-rolling operation, 
or when the epoch changes due to a namenode failover.
 
But in the current scenario there is no failover; the namenode is just in safe 
mode. If we leave safe mode, the active namenode crashes.
 
I.e., the current open segment is edits_inprogress_10356376710, but it is not 
recovered or finalized after the JN2 restart. I think we need to recover the 
edits after a JN restart.
 
 
 
{code:java}
Journal node 
2020-06-20 16:11:53,458 INFO  server.Journal 
(Journal.java:scanStorageForLatestEdits(193)) - Latest log is 
EditLogFile(file=/hadoop/hdfs/journal/PRODNNHA/current/edits_inprogress_10356376710,first=10356376710,last=10356376710,inProgress=true,hasCorruptHeader=false)
2020-06-20 16:19:06,397 INFO  ipc.Server (Server.java:logException(2435)) - IPC 
Server handler 3 on 8485, call 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.journal from 
10.x.x.x:28444 Call#49083225 Retry#0
org.apache.hadoop.hdfs.qjournal.protocol.JournalOutOfSyncException: Can't 
write, no segment open
        at 
org.apache.hadoop.hdfs.qjournal.server.Journal.checkSync(Journal.java:484)
{code}
{code:java}
Namenode log:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions 
to achieve quorum size 2/3. 1 successful responses:
10.x.x.x:8485: null [success]
2 exceptions thrown:
10.y.y.y:8485: Can't write, no segment open
{code}
 
 






Re: [VOTE] Release Apache Hadoop 3.3.0 - RC0

2020-07-13 Thread Eric Badger
+1 (binding)

- Built from source on RHEL 7.6
- Deployed on a single-node cluster
- Verified DefaultContainerRuntime
- Verified RuncContainerRuntime (after setting things up with the
docker-to-squash tool available on YARN-9564)

Eric


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-13 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/203/

[Jul 13, 2020 6:12:48 AM] (Xiaoqiao He) HDFS-14498 LeaseManager can loop 
forever on the file for which create has failed. Contributed by Stephen 
O'Donnell.
[Jul 13, 2020 12:30:02 PM] (Steve Loughran) HDFS-13934. Multipart uploaders to 
be created through FileSystem/FileContext.
[Jul 13, 2020 6:07:48 PM] (noreply) HADOOP-17105. S3AFS - Do not attempt to 
resolve symlinks in globStatus (#2113)
[Jul 13, 2020 6:57:50 PM] (ericp) YARN-10297. 
TestContinuousScheduling#testFairSchedulerContinuousSchedulingInitTime fails 
intermittently. Contributed by Jim Brennan (Jim_Brennan)
[Jul 13, 2020 7:55:34 PM] (Hanisha Koneru) HADOOP-17116. Skip Retry INFO 
logging on first failover from a proxy
[Jul 13, 2020 11:10:39 PM] (Eric Badger) YARN-10348. Allow RM to always cancel 
tokens after app completes. Contributed by


