This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/hadoop-release-support.git


The following commit(s) were added to refs/heads/main by this push:
     new 797ff51  HADOOP-19087 3.4.1 RC2 validation
797ff51 is described below

commit 797ff5127fb06cc5a4f35804087cb3ce1f9eb37d
Author: Steve Loughran <ste...@cloudera.com>
AuthorDate: Fri Sep 27 19:14:36 2024 +0100

    HADOOP-19087 3.4.1 RC2 validation
---
 README.md                                  |  83 +++++++++------------
 build.xml                                  | 114 +++++++++++++++++++++++++++--
 doc/thirdparty.md                          |  75 +++++++++++++++++++
 src/releases/release-info-3.4.1.properties |   8 +-
 4 files changed, 222 insertions(+), 58 deletions(-)

diff --git a/README.md b/README.md
index 374b564..969718e 100644
--- a/README.md
+++ b/README.md
@@ -104,7 +104,7 @@ Ant uses this to set the property `release.info.file` to the path
 `src/releases/release-info-${release.version}.properties`
 
 ```properties
-release.info.file=src/releases/release-info-3.4.0.prpoperties
+release.info.file=src/releases/release-info-3.4.1.properties
 ```
 
 This is then loaded, with the build failing if it is not found.
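+
+For example, selecting this release in `build.properties` (assuming the
+property name follows the pattern described above):
+
+```properties
+release.version=3.4.1
+```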
@@ -478,8 +478,10 @@ This will import all the KEYS from
 then verify the signature of each downloaded file.
 
 If you don't yet trust the key of whoever signed the release then
-now is the time to use the keytool to declare that you trust them
--after performing whatever verification you consider is wise.
+1. In your keychain app, update the keys from the key server to see
+   whether they have been signed by others.
+2. Perform whatever key verification you can, sign the key at that
+   trust level and, ideally, push the signature back up to the key
+   servers -for example with the `gpg` commands sketched below.
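+
+For example, using `gpg` directly; the key ID below is a placeholder for
+the release manager's key:
+
+```bash
+# pull any new signatures on the key down from the key servers
+gpg --refresh-keys
+
+# sign the key after whatever verification you have done
+gpg --sign-key 0x1234ABCD
+
+# optionally publish your signature back to the key servers
+gpg --send-keys 0x1234ABCD
+```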
 
 ### untar source and build.
 
@@ -498,7 +500,8 @@ ant release.src.untar release.src.build
 ```bash
 ant release.site.untar release.site.validate
 ```
-
+Validation is pretty minimal; it just looks for the existence
+of `index.html` files in the site root and under `api/`.
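+
+Roughly, the check amounts to the following, with `$SITE_DIR` standing in
+for wherever the site tarball was expanded:
+
+```bash
+test -f "$SITE_DIR/index.html" && test -f "$SITE_DIR/api/index.html" \
+  || echo "site validation would fail"
+```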
 
 ### untar binary release
 
@@ -516,7 +519,7 @@ ant release.bin.commands
 ```
 
 This will fail on a platform where the native binaries don't load,
-unless the checknative command has been disabled.
+unless the `hadoop checknative` command has been disabled.
 
 This can be done in `build.properties`
 
@@ -528,6 +531,12 @@ check.native.binaries=false
 ant release.bin.commands -Dcheck.native.binaries=false
 ```
 
+If `check.native.binaries` is false, the `bin/hadoop checknative`
+command is still executed and its outcome printed (reporting a failure
+if the native binaries are not present), but the ant build itself
+succeeds.
+
 ## Testing ARM binaries
 
 ```bash
@@ -537,10 +546,14 @@ ant release.arm.commands
 
 # Testing on a remote server
 
-Currently the way to do this is to clone the hadoop-release
+Currently the way to do this is to clone this hadoop-release-support
 repository to the remote server and run the validation
 commands there.
 
+```sh
+git clone https://github.com/apache/hadoop-release-support.git
+```
+
 # Building and testing projects from the staged maven artifacts
 
 A lot of the targets build maven projects from the staged maven artifacts.
@@ -554,7 +567,12 @@ For this to work
    on their own branches.
 4. Some projects need java11 or later. 
 
-First, purge your maven repository of all hadoop- JAR files of the
+Some of these builds/tests are slow, but they can all be executed in parallel unless
+you are actually trying to transitively build components, such as running spark tests with
+the parquet artifact you built against the RC.
+If you find yourself doing this: you've just become a CI system without the automation.
+
+First, purge your maven repository of all `hadoop-` JAR files of the
 pending release version
 
 ```bash
@@ -576,7 +594,9 @@ To see the dependencies of the maven project:
 ant mvn-validate-dependencies
 ```
 
-This saves the output to the file `target/mvndeps.txt`.
+This saves the output to the file `target/mvndeps.txt` and explicitly
+checks for some known "forbidden" artifacts that must not be exported
+as transitive dependencies.
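+
+To scan the file from the command line, a grep along these lines works;
+the artifact names here are purely illustrative:
+
+```bash
+grep -E "log4j-1|slf4j-log4j12" target/mvndeps.txt && echo "found a suspect artifact"
+```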
 
 Review this to make sure there are no unexpected artifacts coming in,
 
@@ -595,7 +615,6 @@ Note: this does not include the AWS V1 SDK `-Pextra` profile.
 [Big Data Interop](https://github.com/GoogleCloudPlatform/bigdata-interop).
 
 * This is java 11+ only.
-* currently only builds against AWS v1 SDK.
 
 Ideally, you should run the tests, or even better, run them before the RC is up for review.
 
@@ -639,21 +658,9 @@ This independent module tests the s3a, gcs and abfs connectors,
 and associated committers, through the spark RDD and SQL APIs.
 
 
-[cloud integration](https://github.com/hortonworks-spark/cloud-integration)
-```bash
-ant cloud-examples.build
-ant cloud-examples.test
-```
-
-
-The test run is fairly tricky to get running; don't try and do this while
-* MUST be java 11+
-* Must have `cloud.test.configuration.file` set to an XML conf file
-  declaring the auth credentials and stores to use for the target object stores
-  (s3a, abfs, gcs)
-
+## Build and test HBase HBoss filesystem
 
-## Build and test HBase filesystem
+*Hadoop 3.4.0 notes: the changes to test under v2 SDK aren't merged in; expect failure.*
 
 [hbase-filesystem](https://github.com/apache/hbase-filesystem.git)
 
@@ -666,20 +673,20 @@ Integration tests will go through S3A connector.
 ant hboss.build
 ```
 
-Hadoop 3.4.0 notes: the changes to test under v2 SDK aren't merged in; expect failure.
 
 ## Parquet build and test
 
 To clean build Apache Parquet:
+
 ```bash
-ant parquet.test
+ant parquet.build
 ```
 
 There's no profile for using ASF staging as a source for artifacts.
 Run this after the spark build so the files are already present.
 
 
-To clean build Apache Parquet and then run the tests in the parquet-hadoop module:
+To clean build Apache Parquet and then run the tests in the `parquet-hadoop` module:
 ```bash
 ant parquet.test
 ```
@@ -900,7 +907,7 @@ For some unknown reason the parquet build doesn't seem to cope.
 Drop the staged artifacts from nexus
 [https://repository.apache.org/#stagingRepositories](https://repository.apache.org/#stagingRepositories)
 
-Delete the tag. Print out the delete command and then copy/paste it into a terminal in the hadoop repo
+Delete the tag. Print out the delete command and then copy/paste it into a terminal in the hadoop repository
 
 ```bash
 ant print-tag-command
@@ -923,28 +930,10 @@ ant stage-svn-rollback
 ant stage-svn-log
 ```
 
-# Releasing Hadoop Third party
-
-See wiki page [How To Release Hadoop-Thirdparty](https://cwiki.apache.org/confluence/display/HADOOP2/How+To+Release+Hadoop-Thirdparty)
-
-
-Support for this release workflow is pretty minimal, but releasing it is simpler
-
-* Update the branches and maven artifact versions
-* build/test. This can be done with the help of a PR to upgrade hadoop.
-* create the vote message.
+# Releasing Hadoop-thirdparty
 
-## Configuration options
 
-All options are prefixed `3p.`
-
-## Targets:
-
-All targets are prefixed `3p.`
-
-```
-3p.mvn-purge : remove all third party artifacts from the repo 
-```
+See [releasing Hadoop-thirdparty](doc/thirdparty.md)
 
 # Contributing to this module
 
diff --git a/build.xml b/build.xml
index 9b8780e..8edbb3d 100644
--- a/build.xml
+++ b/build.xml
@@ -158,6 +158,7 @@
     <set name="svn.apache.dist" value="https://dist.apache.org/"/>
     <set name="svn.staging.url" 
value="${svn.apache.dist}/repos/dist/dev/hadoop/${rc.dirname}"/>
     <set name="svn.production.url" 
value="${svn.apache.dist}/repos/dist/release/hadoop/common/${release}"/>
+
     <set name="tag.name" value="release-${rc.name}"/>
     <set name="production.commit.msg" value="${jira.id}. Releasing Hadoop 
${hadoop.version}" />
 
@@ -456,7 +457,7 @@
   <!--  Staging operations  -->
   <!-- ========================================================= -->
 
-  <target name="stage" depends="init"
+  <target name="stage" depends="staging-init"
     description="move the RC to the svn staging dir">
 
     <require p="staging.dir"/>
@@ -478,14 +479,19 @@
     </echo>
   </target>
 
+  <!--
+  Set up for staging releases
+  -->
   <target name="staging-init"
     description="init svn staging"
     depends="init">
+    <require p="staging.dir"/>
     <require p="jira.id"/>
     <require p="git.commit.id"/>
     <echo>
       staging.commit.msg = ${staging.commit.msg}
       production.commit.msg = ${production.commit.msg}
+      staging.dir = ${staging.dir}
       svn.staging.url = ${svn.staging.url}
       svn.production.url = ${svn.production.url}
     </echo>
@@ -1498,26 +1504,71 @@ ${arm.asc}
     <!-- and load the file it references.-->
     <loadproperties srcFile="${3p.release.info.file}"/>
 
-
-    <set name="3p.release" value="hadoop-thirdparty-${3p.version}"/>
+    <set name="3p.release" value="thirdparty-${3p.version}"/>
     <set name="3p.rc.name" value="${3p.version}-${3p.rc}"/>
     <set name="3p.rc.dirname" value="${3p.release}-${3p.rc}"/>
+
+    <!-- the actual staging/release paths are in the same svn repo as hadoop main -->
+    <set name="3p.svn.staging.url"
+      value="${svn.apache.dist}/repos/dist/dev/hadoop/${3p.rc.dirname}"/>
+    <set name="3p.svn.production.url"
+      value="${svn.apache.dist}/repos/dist/release/hadoop/thirdparty/${3p.release}"/>
+    <setpath name="3p.staged.artifacts.dir" location="${staging.dir}/${3p.rc.dirname}"/>
+
+    <set name="3p.tag.name" value="release-${3p.rc.name}"/>
+
+    <set name="3p.staging.commit.msg"
+      value="${3p.jira.id}. Hadoop thirdparty ${3p.rc.name} built from 
${3p.git.commit.id}" />
+
+    <set name="3p.production.commit.msg"
+      value="${3p.jira.id}. Releasing Hadoop Thirdparty ${3p.version}" />
+
   </target>
 
 
-  <!-- Cut all hadoop-thirdparty -->
+  <!-- Cut all hadoop-thirdparty artifacts from the repo -->
 
-  <target name="3p.mvn-purge" depends="init"
+  <target name="3p.mvn-purge" depends="3p.init"
     description="purge all local hadoop-thirdparty ">
 
-
     <echo>
       deleting all hadoop-thirdparty artifacts
     </echo>
-    <delete dir="${hadoop.artifacts}/hadoop-thirdparty" />
+    <delete>
+      <fileset dir="${hadoop.artifacts}/hadoop-thirdparty"
+        includes="**/${3p.version}/*"/>
+    </delete>
 
   </target>
 
+  <target name="3p.print-tag-command"
+    description="print the git command to tag the rc"
+    depends="3p.init">
+    <require p="3p.git.commit.id"/>
+    <echo>
+      # command to tag the commit
+      git tag -s ${3p.tag.name} -m "Release candidate ${3p.rc.name}" ${3p.git.commit.id}
+
+      # how to verify it
+      git tag -v ${3p.tag.name}
+
+      # how to view the log to make sure it really is the right commit
+      git log tags/${3p.tag.name}
+
+      # how to push to apache
+      git push apache ${3p.tag.name}
+
+      # if needed, how to delete locally
+      git tag -d ${3p.tag.name}
+
+      # if needed, how to delete it from apache
+      git push --delete apache ${3p.tag.name}
+
+      # tagging the final release
+      git tag -s rel/release-${3p.version} -m "${3p.jira.id}. Hadoop Thirdparty ${3p.version} release"
+      git push origin rel/release-${3p.version}
+    </echo>
+  </target>
 
   <target name="3p.vote-message"
     depends="3p.init"
@@ -1549,4 +1600,53 @@ Message is in file ${3p.message.out}
 
   </target>
 
+  <target name="3p.stage-svn-rollback"
+    description="rollback a thirdparty version staged to RC"
+    depends="3p.init">
+
+    <svn dir="${staging.dir}">
+      <arg value="update" />
+    </svn>
+    <svn dir="${staging.dir}">
+      <arg value="rm" />
+      <arg value="${3p.staged.artifacts.dir}" />
+    </svn>
+    <echo>Committing with message: rolling back ${3p.staging.commit.msg}. Please wait</echo>
+    <svn dir="${staging.dir}">
+      <arg value="commit" />
+      <arg value="-m" />
+      <arg value="rolling back ${3p.staging.commit.msg}" />
+    </svn>
+  </target>
+
+
+  <target name="3p.stage-move-to-production"
+    description="promote the staged the thirdparty RC into dist"
+    depends="3p.init">
+
+    <svn dir="${staging.dir}">
+      <arg value="update" />
+    </svn>
+
+    <svn dir="${staging.dir}">
+      <arg value="info" />
+      <arg value="${3p.svn.staging.url}" />
+    </svn>
+
+    <echo>Committing with message ${3p.production.commit.msg}. Please wait</echo>
+
+    <svn dir="${staging.dir}">
+      <arg value="move" />
+      <arg value="${3p.svn.staging.url}" />
+      <arg value="${3p.svn.production.url}" />
+      <arg value="-m" />
+      <arg value="${3p.production.commit.msg}" />
+    </svn>
+    <svn dir="${staging.dir}">
+      <arg value="commit" />
+      <arg value="-m" />
+      <arg value="${3p.production.commit.msg}" />
+    </svn>
+  </target>
+
 </project>
diff --git a/doc/thirdparty.md b/doc/thirdparty.md
new file mode 100644
index 0000000..15b600b
--- /dev/null
+++ b/doc/thirdparty.md
@@ -0,0 +1,75 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+# Releasing Hadoop Third party
+
+See wiki page [How To Release Hadoop-Thirdparty](https://cwiki.apache.org/confluence/display/HADOOP2/How+To+Release+Hadoop-Thirdparty)
+
+
+Support for this release workflow is pretty minimal, but releasing it is simpler
+than a manual build.
+
+1. Update the branches and maven artifact versions
+2. Build and test. This can be done with the help of a (draft) PR to upgrade hadoop from the RC.
+3. Create the vote message.
+
+## Configuration options
+
+All options are prefixed `3p.`; the rest of their name matches that
+of the core release.
+
+The core property is the version to release:
+
+```properties
+3p.version=1.3.0
+```
+This will trigger the loading of the relevant file from
+`src/releases/3p/`:
+
+```
+src/releases/3p/3p-release-1.3.0.properties
+```
+It contains the options needed to create the vote message and execute the other
+targets in the build to validate the third party release.
+
+```properties
+3p.rc=RC1
+3p.branch=https://github.com/apache/hadoop-thirdparty/commits/release-1.3.0-RC1
+3p.git.commit.id=0fd62903b071b5186f31b7030ce42e1c00f6bb6a 
+3p.jira.id=HADOOP-19252
+3p.nexus.staging.url=https://repository.apache.org/content/repositories/orgapachehadoop-1420
+3p.src.dir=https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.3.0-RC1
+3p.staging.url=https://dist.apache.org/repos/dist/dev/hadoop/hadoop-thirdparty-1.3.0-RC1
+3p.tag.name=release-1.3.0-RC1
+```
+
+## Targets:
+
+All targets are prefixed `3p.`
+
+| target                 | function                                              |
+|------------------------|-------------------------------------------------------|
+| `3p.mvn-purge`         | remove all third party artifacts from the local repo  |
+| `3p.vote-message`      | generate a vote message in `target/3p.vote.txt`       |
+| `3p.print-tag-command` | print all the tag commands for a release              |
+
+```
+ 3p.mvn-purge                     purge all local hadoop-thirdparty 
+ 3p.print-tag-command             print the git command to tag the rc
+ 3p.stage-move-to-production      promote the staged thirdparty RC into dist
+ 3p.stage-svn-rollback            rollback a thirdparty version staged to RC
+ 3p.vote-message                  build the vote message
+```
+
+Third party artifacts must be staged to the same svn repository as for
+staging full hadoop releases, as set in `staging.dir`.
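+
+A sketch of the matching `build.properties` entries; the staging path is
+illustrative, not a required location:
+
+```properties
+3p.version=1.3.0
+staging.dir=/path/to/svn/dev/hadoop/checkout
+```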
diff --git a/src/releases/release-info-3.4.1.properties b/src/releases/release-info-3.4.1.properties
index 06f6510..aed2246 100644
--- a/src/releases/release-info-3.4.1.properties
+++ b/src/releases/release-info-3.4.1.properties
@@ -16,18 +16,18 @@
 
 # property file for 3.4.1
 hadoop.version=3.4.1
-rc=RC1
+rc=RC2
 previous.version=3.4.0
 release.branch=3.4.1
-git.commit.id=247daf0f827adc96a3847bb40e0fec3fc85f33bd
+git.commit.id=b3a4b582eeb729a0f48eca77121dd5e2983b2004
 
 jira.id=HADOOP-19087
 jira.title=Release 3.4.1
 
-amd.src.dir=https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC1
+amd.src.dir=https://dist.apache.org/repos/dist/dev/hadoop/hadoop-3.4.1-RC2/
 arm.src.dir=${amd.src.dir}
 http.source=${amd.src.dir}
-asf.staging.url=https://repository.apache.org/content/repositories/orgapachehadoop-1417
+asf.staging.url=https://repository.apache.org/content/repositories/orgapachehadoop-1426
 
 cloudstore.profile=sdk2
 

