For more details, see
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/449/
[Sep 18, 2019 5:31:33 PM] (ekrogen) HDFS-14569. Result of crypto -listZones is
not formatted properly.
[Sep 18, 2019 9:51:21 PM] (kihwal) HDFS-13959.
TestUpgradeDomainBlockPlacementPolicy is flaky.
+1.
Thanks,
Junping
Elek, Marton wrote on Tue, Sep 17, 2019 at 5:48 PM:
>
>
> TLDR; I propose to move Ozone-related code out of Hadoop trunk and
> store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git
>
>
>
>
> When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
> be
+1 (non-binding)
- built from the source on centos7
- verified signatures & sha512 checksums
- verified basic HDFS operations with CLI
As Steve already mentioned, we must warn users about Guava.
Thanks for organizing the release Rohit.
Thanks,
Dinesh
On Wed, Sep 18, 2019 at 2:20 PM Steve
On Wed, Sep 18, 2019 at 6:04 PM Rohith Sharma K S
wrote:
> Thanks Steve for the detailed verification. Inline comment:
>
> On Wed, 18 Sep 2019 at 20:34, Steve Loughran
> wrote:
>
> > >
> > > +1 binding.
> > >
> > > One caveat: warn people that guava is now at 27.0 -and that if you run
> > > with an
+1 (binding)
Thanks Rohith for the work on the release.
* built from the source (archlinux)
* verified signatures
* verified sha512 checksums
* started a docker-based pseudo cluster
* tested basic HDFS operations with CLI
* Checked if the sources are uploaded to the maven staging repo
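Several of the verification lists in this thread include checking signatures and sha512 checksums. A minimal sketch of the checksum round-trip is below; the artifact names are placeholders, not the real 3.2.1-RC0 file names, and in a real check the tarball, `.sha512`, and `.asc` files come from the RC staging area.

```shell
set -e
# Stand-in artifact; a real verification downloads the RC tarball instead.
printf 'release bits\n' > hadoop-x.y.z.tar.gz
# Produce and then verify the sha512 checksum file.
sha512sum hadoop-x.y.z.tar.gz > hadoop-x.y.z.tar.gz.sha512
sha512sum -c hadoop-x.y.z.tar.gz.sha512   # prints "hadoop-x.y.z.tar.gz: OK"
# Signature check, shown only (needs the project KEYS file and the .asc):
# gpg --import KEYS
# gpg --verify hadoop-x.y.z.tar.gz.asc hadoop-x.y.z.tar.gz
```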
+0
* built from the source tarball.
* launched a 3-node cluster.
* ran some example MR jobs.
* ran some CLI file operations.
Same as in 3.1.3 RC0, a verbose INFO message is emitted on data transfer
operations:
$ bin/hadoop fs -put README.txt '/c;d/'
2019-09-18 17:41:24,977 INFO
Siddharth Seth created HADOOP-16586:
---
Summary: ITestS3GuardFsck fails when run using a local metastore
Key: HADOOP-16586
URL: https://issues.apache.org/jira/browse/HADOOP-16586
Project: Hadoop
[
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Gabor Bota resolved HADOOP-16547.
-
Resolution: Fixed
> s3guard prune command doesn't get AWS auth chain from FS
>
+1
> On Sep 17, 2019, at 2:49 AM, Elek, Marton wrote:
>
>
>
> TLDR; I propose to move Ozone-related code out of Hadoop trunk and store it
> in a separate *Hadoop* git repository, apache/hadoop-ozone.git
>
>
>
>
> When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
Thanks Steve for the detailed verification. Inline comment:
On Wed, 18 Sep 2019 at 20:34, Steve Loughran
wrote:
> >
> > +1 binding.
> >
> > One caveat: warn people that Guava is now at 27.0, and that if you run
> > with an older version of Guava things will inevitably break.
>
>: Could you
> one thing to consider here as you are giving up your ability to make
> changes in hadoop-* modules, including hadoop-common, and their
> dependencies, in sync with your own code. That goes for filesystem
contract
> tests.
>
> are you happy with that?
Yes. I think we can live with it.
>
> +1 binding.
>
> One caveat: warn people that Guava is now at 27.0, and that if you run
> with an older version of Guava things will inevitably break.
>
>
> steps to validate
> =================
>
> * downloaded src and binary artifacts
> * after import of KEYS and trusting Rohith's key,
Ranith Sardar created HADOOP-16585:
--
Summary: [Tool:NNloadGeneratorMR] Multiple threads are using same
id for creating file LoadGenerator#write
Key: HADOOP-16585
URL:
Thanks Rohith for driving the release.
+1 (non-binding)
-Built from source on Ubuntu-18.04
-Successful Native build.
-Verified basic HDFS Commands.
-Verified basic Erasure Coding Commands.
-Verified basic RBF commands.
-Browsed HDFS UI.
Thanks
-Ayush
On Wed, 18 Sep 2019 at 15:41, Weiwei Yang
+1 (binding)
Downloaded tarball, setup a pseudo cluster manually
Verified basic HDFS operations, copy/view files
Verified basic YARN operations, run sample DS jobs
Verified basic YARN RESTful APIs, e.g. cluster/nodes info, etc.
Set and verified YARN node-attributes, including CLI
Thanks
Weiwei
On
one thing to consider here as you are giving up your ability to make
changes in hadoop-* modules, including hadoop-common, and their
dependencies, in sync with your own code. That goes for filesystem contract
tests.
are you happy with that?
On Tue, Sep 17, 2019 at 10:48 AM Elek, Marton wrote:
Hi Masatake,
My bad. I read it wrong. Yeah, you're right. HDFS-14759 can be
backported to both branch-3.2 and branch-3.1.
Let's see if any other blocker/critical issues come up; otherwise, I'd
personally prefer not to run another RC and vote for this minor change.
BR,
Zhankun
On Wed, 18
+1 (non-binding)
On Wed, Sep 18, 2019 at 5:33 AM Weiwei Yang wrote:
> +1 (binding)
>
> Thanks
> Weiwei
>
> On Wed, Sep 18, 2019 at 6:35 AM Wangda Tan wrote:
>
> > +1 (binding).
> >
> > From my experiences of Submarine project, I think moving to a separate
> repo
> > helps.
> >
> > - Wangda
>
Hi Zhankun,
> Can you please help to provide a branch-3.1 patch for HDFS-14759? Or
> we can move it to the next release of branch-3.1, since the noisy info is
> not a blocker issue to me. Does that make sense?
I tried cherry-picking HDFS-14759 on my side.
Since there was no conflict, I
Hi Masatake,
Thanks for helping to verify!
I checked that branch-3.2 has the HDFS-14759 committed already.
Release-3.2.1-RC0 should have no such issue.
For branch-3.1, cherry-picking the same commit has conflicts. I'm confirming
if we can fix it or there's a feasible plan to backport the whole
Thanks for putting this up, Zhankun Tang.
While I was testing RC0 with the CLI, a
noisy INFO message was emitted on every data transfer operation:
2019-09-17 16:00:42,942 INFO sasl.SaslDataTransferClient: SASL
encryption trust check: localHostTrusted = false, remoteHostTrusted = false
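Until the HDFS-14759 fix is in a release, one possible workaround is to raise the log level for that class in `etc/hadoop/log4j.properties`. This is a sketch, not a verified fix: the logger name is inferred from the `sasl.SaslDataTransferClient` prefix in the message, assuming the class lives in the `org.apache.hadoop.hdfs.protocol.datatransfer.sasl` package.

```properties
# Assumed logger name; silences the per-operation SASL trust-check INFO line.
log4j.logger.org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient=WARN
```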
The