update: got a stub project whose test run requires the client api and
shaded artifacts, and whose clean target will delete all artifacts of a
specific version from your local repo, so as to ensure it is all untainted

https://github.com/steveloughran/validate-hadoop-client-artifacts
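
The clean-up amounts to something like the following (a sketch only, assuming
the default ~/.m2 layout and version 3.3.3; the project itself may implement
it differently):

    # remove every org.apache.hadoop 3.3.3 artifact from the local repository,
    # so the test run has to re-resolve them from the staging repository
    find ~/.m2/repository/org/apache/hadoop -type d -name 3.3.3 -prune -exec rm -rf {} +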

I do now have a version in staging with the files; I still don't know why the
previous deploy lacked them. I will do more testing tomorrow before putting up
the next RC, which will be off the same git commit as before, just a rebuild,
repackage and republish.

-Steve

On Tue, 10 May 2022 at 00:18, Steve Loughran <ste...@cloudera.com> wrote:

> I've done another docker build and the client jars appear to be there.
> I'll test tomorrow before putting up another vote. It will be exactly the
> same commit as before, just recompiled/republished.
>
> On Mon, 9 May 2022 at 17:45, Chao Sun <sunc...@apache.org> wrote:
>
>> Agreed, that step #10 is outdated and should be removed (I skipped
>> that when releasing Hadoop 3.3.2 but didn't update it, sorry).
>>
>> > How about using
>> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>>
>> Akira, I tried this too but it didn't work. I think we'd need the
>> artifacts to be properly pushed to the staging repository.
>>
>> > Could you please let me know how I can consume the Hadoop 3 jars in
>> maven?
>>
>> Gautham (if you are following this thread), you'll need to add the
>> following:
>>
>>     <repository>
>>       <id>staged</id>
>>       <name>staged-releases</name>
>>       <url>https://repository.apache.org/content/repositories/staging/</url>
>>       <releases>
>>         <enabled>true</enabled>
>>       </releases>
>>       <snapshots>
>>         <enabled>true</enabled>
>>       </snapshots>
>>     </repository>
>>
>> to the `<repositories>` section in your Maven pom.xml file.
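>>
>> A quick way to sanity-check that the staged artifacts resolve (one option,
>> not the only one) is something like:
>>
>>     mvn dependency:get \
>>       -Dartifact=org.apache.hadoop:hadoop-client-api:3.3.3 \
>>       -DremoteRepositories=https://repository.apache.org/content/repositories/staging/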
>>
>> On Mon, May 9, 2022 at 8:52 AM Steve Loughran
>> <ste...@cloudera.com.invalid> wrote:
>> >
>> > I didn't do that, as the docker image was doing it itself... I discussed
>> > this with Akira and Ayush and they agreed. So whatever went wrong, it was
>> > something else.
>> >
>> > I have been building a list of things I'd like to change there; cutting
>> > that line was one of them, but I need to work out the correct workflow.
>> >
>> > trying again, and creating a stub module to verify the client is in
>> staging
>> >
>> > On Mon, 9 May 2022 at 15:19, Masatake Iwasaki <
>> iwasak...@oss.nttdata.co.jp>
>> > wrote:
>> >
>> > > It seems to be caused by an obsolete instruction in the HowToRelease wiki?
>> > >
>> > > After HADOOP-15058, `mvn deploy` is kicked by
>> > > `dev-support/bin/create-release --asfrelease`.
>> > > https://issues.apache.org/jira/browse/HADOOP-15058
>> > >
>> > > Step #10 in the "Creating the release candidate (X.Y.Z-RC<N>)" section
>> > > of the wiki still says to run `mvn deploy` with `-DskipShade`.
>> > >
>> > > Two sets of artifacts are deployed after creating the RC based on that
>> > > instruction. The latest one contains empty shaded jars.
>> > >
>> > > hadoop-client-api and hadoop-client-runtime of the already released 3.2.3
>> > > look to have the same issue...
>> > >
>> > > Masatake Iwasaki
>> > >
>> > > On 2022/05/08 6:45, Akira Ajisaka wrote:
>> > > > Hi Chao,
>> > > >
>> > > > How about using
>> > > >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>> > > > instead of
>> https://repository.apache.org/content/repositories/staging/ ?
>> > > >
>> > > > Akira
>> > > >
>> > > > On Sat, May 7, 2022 at 10:52 AM Ayush Saxena <ayush...@gmail.com>
>> wrote:
>> > > >
>> > > >> Hmm, I see the artifacts ideally should have got overwritten by the new
>> > > >> RC, but they didn’t. The reason seems to be that the staging path shared
>> > > >> doesn’t have any jars…
>> > > >> That is why it was picking the old jars. I think Steve needs to run mvn
>> > > >> deploy again…
>> > > >>
>> > > >> Sent from my iPhone
>> > > >>
>> > > >>> On 07-May-2022, at 7:12 AM, Chao Sun <sunc...@apache.org> wrote:
>> > > >>>
>> > > >>>
>> > > >>>>
>> > > >>>> Chao can you use the one that Steve mentioned in the mail?
>> > > >>>
>> > > >>> Hmm how do I do that? Typically after closing the RC in nexus the
>> > > >>> release bits will show up in
>> > > >>>
>> > > >>
>> > >
>> https://repository.apache.org/content/repositories/staging/org/apache/hadoop
>> > > >>> and the Spark build will be able to pick them up for testing. However,
>> > > >>> in this case I don't see any 3.3.3 jars in the URL.
>> > > >>>
>> > > >>>> On Fri, May 6, 2022 at 6:24 PM Ayush Saxena <ayush...@gmail.com>
>> > > wrote:
>> > > >>>>
>> > > >>>> There were two 3.3.3 staged. The earlier one was with skipShade, and
>> > > >>>> its date was also April 22, so I archived that. Chao, can you use the
>> > > >>>> one that Steve mentioned in the mail?
>> > > >>>>
>> > > >>>>> On Sat, 7 May 2022 at 06:18, Chao Sun <sunc...@apache.org>
>> wrote:
>> > > >>>>>
>> > > >>>>> Seems there are some issues with the shaded client, as I was not able
>> > > >>>>> to compile Apache Spark with the RC
>> > > >>>>> (https://github.com/apache/spark/pull/36474). Looks like it was compiled
>> > > >>>>> with the `-DskipShade` option and the hadoop-client-api JAR doesn't
>> > > >>>>> contain any classes:
>> > > >>>>>
>> > > >>>>> ➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
>> > > >>>>> META-INF/
>> > > >>>>> META-INF/MANIFEST.MF
>> > > >>>>> META-INF/NOTICE.txt
>> > > >>>>> META-INF/LICENSE.txt
>> > > >>>>> META-INF/maven/
>> > > >>>>> META-INF/maven/org.apache.hadoop/
>> > > >>>>> META-INF/maven/org.apache.hadoop/hadoop-client-api/
>> > > >>>>> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
>> > > >>>>>
>> META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
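>> > > >>>>>
>> > > >>>>> (For comparison, a correctly shaded hadoop-client-api jar should list the
>> > > >>>>> org/apache/hadoop/... classes; a quick check would be something like
>> > > >>>>> `jar tf hadoop-client-api-3.3.3.jar | grep -c '\.class$'`, which should
>> > > >>>>> report a large count rather than zero.)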
>> > > >>>>>
>> > > >>>>> On Fri, May 6, 2022 at 4:24 PM Stack <st...@duboce.net> wrote:
>> > > >>>>>>
>> > > >>>>>> +1 (binding)
>> > > >>>>>>
>> > > >>>>>>   * Signature: ok
>> > > >>>>>>   * Checksum : passed
>> > > >>>>>>   * Rat check (1.8.0_191): passed
>> > > >>>>>>    - mvn clean apache-rat:check
>> > > >>>>>>   * Built from source (1.8.0_191): failed
>> > > >>>>>>    - mvn clean install  -DskipTests
>> > > >>>>>>    - mvn -fae --no-transfer-progress -DskipTests
>> > > >> -Dmaven.javadoc.skip=true
>> > > >>>>>> -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
>> > > >>>>>> -Drequire.zstd -Drequire.test.libhadoop clean install
>> > > >>>>>>   * Unit tests pass (1.8.0_191):
>> > > >>>>>>     - HDFS Tests passed (Didn't run more than this).
>> > > >>>>>>
>> > > >>>>>> Deployed a ten-node HA HDFS cluster with three namenodes and five
>> > > >>>>>> journalnodes. Ran a ten-node HBase (older version of the 2.5 branch,
>> > > >>>>>> built against 3.3.2) against it. Tried a small verification job. Good.
>> > > >>>>>> Ran a bigger job with mild chaos. All seems to be working properly
>> > > >>>>>> (recoveries, logs look fine). Killed a namenode. Failover worked
>> > > >>>>>> promptly. UIs look good. Poked at the HDFS CLI. Seems good.
>> > > >>>>>>
>> > > >>>>>> S
>> > > >>>>>>
>> > > >>>>>> On Tue, May 3, 2022 at 4:24 AM Steve Loughran
>> > > >> <ste...@cloudera.com.invalid>
>> > > >>>>>> wrote:
>> > > >>>>>>
>> > > >>>>>>> I have put together a release candidate (rc0) for Hadoop 3.3.3
>> > > >>>>>>>
>> > > >>>>>>> The RC is available at:
>> > > >>>>>>> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
>> > > >>>>>>>
>> > > >>>>>>> The git tag is release-3.3.3-RC0, commit d37586cbda3
>> > > >>>>>>>
>> > > >>>>>>> The maven artifacts are staged at
>> > > >>>>>>>
>> > > >>
>> > >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>> > > >>>>>>>
>> > > >>>>>>> You can find my public key at:
>> > > >>>>>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> > > >>>>>>>
>> > > >>>>>>> Change log
>> > > >>>>>>>
>> > > https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md
>> > > >>>>>>>
>> > > >>>>>>> Release notes
>> > > >>>>>>>
>> > > >>
>> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md
>> > > >>>>>>>
>> > > >>>>>>> There's a very small number of changes, primarily critical
>> > > >> code/packaging
>> > > >>>>>>> issues and security fixes.
>> > > >>>>>>>
>> > > >>>>>>>
>> > > >>>>>>>    - The critical fixes which shipped in the 3.2.3 release
>> > > >>>>>>>    - CVEs in our code and dependencies
>> > > >>>>>>>    - Shaded client packaging issues
>> > > >>>>>>>    - A switch from log4j to reload4j
>> > > >>>>>>>
>> > > >>>>>>>
>> > > >>>>>>> reload4j is an active fork of the log4j 1.2.17 library with the classes
>> > > >>>>>>> which contain CVEs removed. Even though Hadoop never used those classes,
>> > > >>>>>>> they regularly raised alerts in security scans and concern from users.
>> > > >>>>>>> Switching to the forked project allows us to ship a secure logging
>> > > >>>>>>> framework. It will complicate the builds of downstream maven/ivy/gradle
>> > > >>>>>>> projects which exclude our log4j artifacts, as they need to cut the new
>> > > >>>>>>> dependency instead/as well.
>> > > >>>>>>>
>> > > >>>>>>> See the release notes for details.
>> > > >>>>>>>
>> > > >>>>>>> This is my first release through the new docker build process, so do
>> > > >>>>>>> please validate artifact signing &c to make sure it is good. I'll be
>> > > >>>>>>> trying builds of downstream projects.
>> > > >>>>>>>
>> > > >>>>>>> We know there are some outstanding issues with at least one library we
>> > > >>>>>>> are shipping (okhttp), but I don't want to hold this release up for it.
>> > > >>>>>>> If the docker-based release process works smoothly enough, we can do a
>> > > >>>>>>> follow-up security release in a few weeks.
>> > > >>>>>>>
>> > > >>>>>>> Please try the release and vote. The vote will run for 5 days.
>> > > >>>>>>>
>> > > >>>>>>> -Steve
>> > > >>>>>>>
>> > > >>>>>
>> > > >>>>>
>> > > >>>>>
>> > > >>
>> > > >>
>> > > >>
>> > > >>
>> > > >
>> > >
>> > >
>> > >
>>
>
