Thanks Steve. I see now that the branch cut was way back in October so I
definitely understand your frustration here!

This made me realize that HDFS-16832
<https://issues.apache.org/jira/browse/HDFS-16832>, which resolves a very
similar issue to the aforementioned HDFS-16923, is also missing from the
RC. I erroneously marked it with a fix version of 3.3.5 -- that was before
the initial 3.3.5 RC was made, and I didn't notice that the branch had
already been cut. My apologies for that. I've pushed both HDFS-16832 and
HDFS-16923 to branch-3.3.5, so they are ready if/when an RC3 is cut.

In the meantime, I verified against RC2 that a local cluster of NN + standby +
observer + QJM works as expected for some basic HDFS commands.
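Roughly, the "basic HDFS commands" were along these lines (the paths here
are just illustrative placeholders):

  hdfs dfs -mkdir -p /tmp/rc2-smoke
  hdfs dfs -put README.txt /tmp/rc2-smoke/
  hdfs dfs -ls /tmp/rc2-smoke
  hdfs dfs -cat /tmp/rc2-smoke/README.txt
  hdfs dfs -rm -r /tmp/rc2-smoke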

On Fri, Mar 3, 2023 at 2:52 AM Steve Loughran <ste...@cloudera.com.invalid>
wrote:

> shipping broken HDFS isn't something we'd want to do, but if we can be
> confident that all other issues can be addressed in RC3 then I'd be happy.
>
> On Fri, 3 Mar 2023 at 05:09, Ayush Saxena <ayush...@gmail.com> wrote:
>
> >> I will highlight that I am completely fed up with doing this release and
> >> really want to get it out the way -for which I depend on support from as
> >> many other developers as possible.
> >
> >
> > hmm, I can feel the pain. I tried to find a config or a workaround which
> > could dodge this HDFS issue, but unfortunately couldn't find any. If
> > someone does a getListing with needLocation and the file doesn't exist at
> > the Observer, he is gonna get an NPE rather than an FNF. It isn't just the
> > exception: AFAIK Observer reads have some logic around handling FNF
> > specifically, where the client retries against the Active NN or something
> > like that in such cases, so that will be broken as well for this use case.
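> >
> > For context, a client is only on this Observer read path when it is
> > configured with the observer-aware proxy provider, roughly like this (the
> > nameservice name is a placeholder):
> >
> >   dfs.client.failover.proxy.provider.<nameservice> =
> >       org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider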
> >
> > Now, there is no denying the fact that there is an issue on the HDFS side,
> > and it has already been too much work on your side, so you can argue that
> > it might not be a very frequent use case. It's your call.
> >
> > Just sharing, no intention of saying you should do that. But as the RM,
> > "nobody" can force you into a new iteration of an RC; it is gonna be your
> > call and discretion. As far as I know, a release cannot be vetoed by
> > "anybody" as per the Apache release policy (
> > https://www.apache.org/legal/release-policy.html#release-approval). Even
> > our bylaws say that a product release requires a Lazy Majority, not a
> > Consensus Approval.
> >
> > So, you have a way out. You guys are 2 already, and I will give you 1 as
> > a pass in case you are really in a "Get me out of this mess" state; my
> > basic validations on both x86 & Aarch64 are passing as of now, though I
> > couldn't reach the end for any of the RCs.
> >
> > -Ayush
> >
> > On Fri, 3 Mar 2023 at 08:41, Viraj Jasani <vjas...@apache.org> wrote:
> >
> >> While this RC is not going to be final, I just wanted to share the
> >> results of the testing I have done so far with RC1 and RC2.
> >>
> >> * Signature: ok
> >> * Checksum: ok
> >> * Rat check (1.8.0_341): ok
> >>  - mvn clean apache-rat:check
> >> * Built from source (1.8.0_341): ok
> >>  - mvn clean install -DskipTests
> >> * Built tar from source (1.8.0_341): ok
> >>  - mvn clean package -Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true
> >>
> >> * Built images using the tarball, installed and started all of HDFS, JHS
> >> and YARN components
> >> * Ran HBase (latest 2.5) tests against HDFS, ran a RowCounter MapReduce
> >> job
> >> * HDFS CRUD tests
> >> * MapReduce wordcount job
> >>
> >> * Ran S3A tests with scale profile against us-west-2:
> >> mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
> >>
> >> ITestS3AConcurrentOps#testParallelRename is timing out after ~960s. This
> >> is consistently failing and looks like a recent regression.
> >> I was also able to repro it on trunk; will create a Jira.
> >>
> >>
> >> On Mon, Feb 27, 2023 at 9:59 AM Steve Loughran
> >> <ste...@cloudera.com.invalid>
> >> wrote:
> >>
> >> > Mukund and I have put together a release candidate (RC2) for Hadoop
> >> > 3.3.5.
> >> >
> >> > We need anyone who can to verify the source and binary artifacts,
> >> > including those JARs staged on maven, the site documentation and the
> >> > arm64 tar file.
> >> > tar file.
> >> >
> >> > The RC is available at:
> >> > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC2/
> >> >
> >> > The git tag is release-3.3.5-RC2, commit 72f8c2a4888
> >> >
> >> > The maven artifacts are staged at
> >> > https://repository.apache.org/content/repositories/orgapachehadoop-1369/
> >> >
> >> > You can find my public key at:
> >> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
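> >> >
> >> > If you haven't verified a release before, a minimal check of a
> >> > downloaded tarball might look something like this (artifact names are
> >> > placeholders; use the ones actually present in the RC directory):
> >> >
> >> >   curl -O https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >> >   gpg --import KEYS
> >> >   gpg --verify hadoop-3.3.5.tar.gz.asc hadoop-3.3.5.tar.gz
> >> >   sha512sum hadoop-3.3.5.tar.gz   # compare with the published .sha512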
> >> >
> >> > Change log
> >> > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC2/CHANGELOG.md
> >> >
> >> > Release notes
> >> > https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.3.5-RC2/RELEASENOTES.md
> >> >
> >> > This is off branch-3.3 and is the first big release since 3.3.2.
> >> >
> >> > As to what changed since the RC1 attempt last week:
> >> >
> >> >
> >> >    1. Version fixup in JIRA (credit due to Takanobu Asanuma there)
> >> >    2. HADOOP-18470. Remove HDFS RBF text in the 3.3.5 index.md file
> >> >    3. Revert "HADOOP-18590. Publish SBOM artifacts (#5281)" (creating
> >> >    build issues in maven 3.9.0)
> >> >    4. HADOOP-18641. Cloud connector dependency and LICENSE fixup.
> >> >    (#5429)
> >> >
> >> >
> >> > Note, because the arm64 binaries are built separately on a different
> >> > platform and JVM, their jar files may not match those of the x86
> >> > release, and therefore the maven artifacts. I don't think this is
> >> > an issue (what the ASF actually releases is the source tarball; the
> >> > binaries are there for help only, though with the maven repo that's a
> >> > bit blurred).
> >> >
> >> > The only way to be consistent would actually be to untar the x86.tar.gz,
> >> > overwrite its binaries with the arm stuff, retar, sign and push out
> >> > for the vote. Even automating that would be risky.
> >> >
> >> > Please try the release and vote. The vote will run for 5 days.
> >> >
> >> > Steve and Mukund
> >> >
> >>
> >
>
