This is an automated email from the ASF dual-hosted git repository.

gengliang pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new a89536e5f [SPARK-39512] Document docker image release steps (#400)
a89536e5f is described below

commit a89536e5fc4498945149de3e0fb3ec8dc456b908
Author: Holden Karau <hol...@pigscanfly.ca>
AuthorDate: Mon Jul 18 18:27:13 2022 -0700

    [SPARK-39512] Document docker image release steps (#400)
    
    Document the docker image release steps for the release manager to follow when finalizing the release.
---
 release-process.md        | 18 ++++++++++++++++--
 site/release-process.html | 15 +++++++++++++--
 2 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/release-process.md b/release-process.md
index 7c1e9ff2d..e8bba053f 100644
--- a/release-process.md
+++ b/release-process.md
@@ -35,6 +35,9 @@ If you are a new Release Manager, you can read up on the process from the follow
 - gpg for signing https://www.apache.org/dev/openpgp.html
 - svn https://www.apache.org/dev/version-control.html#https-svn
 
+
+You should also get access to the ASF Dockerhub. You can request access by filing an INFRA JIRA ticket (see https://issues.apache.org/jira/browse/INFRA-21282 for an example ticket).
+
 <h3>Preparing gpg key</h3>
 
 You can skip this section if you have already uploaded your key.
@@ -175,10 +178,11 @@ To cut a release candidate, there are 4 steps:
 1. Package the release binaries & sources, and upload them to the Apache staging SVN repo.
 1. Create the release docs, and upload them to the Apache staging SVN repo.
 1. Publish a snapshot to the Apache staging Maven repo.
+1. Create an RC docker image tag (e.g. `3.4.0-rc1`)
 
-The process of cutting a release candidate has been automated via the `dev/create-release/do-release-docker.sh` script.
+The process of cutting a release candidate has been mostly automated via the `dev/create-release/do-release-docker.sh` script.
 Run this script, type information it requires, and wait until it finishes. You can also do a single step via the `-s` option.
-Please run `do-release-docker.sh -h` and see more details.
+Please run `do-release-docker.sh -h` to see more details. The script does not currently generate the RC docker image tag.
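+
+As a rough sketch of that manual step (assuming the `docker-image-tool.sh` flags described under "Create and upload Spark Docker Images" below, run from an extracted RC distribution):
+
+```
+# Illustrative only: build and push images tagged for the release candidate
+./bin/docker-image-tool.sh -r docker.io/apache \
+  -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile \
+  -t 3.4.0-rc1 -X -b java_image_tag=11-jre-slim build
+```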
 
 <h3>Call a vote on the release candidate</h3>
 
@@ -387,6 +391,16 @@ $ git shortlog v1.1.1 --grep "$EXPR" > contrib.txt
 $ git log v1.1.1 --grep "$expr" --shortstat --oneline | grep -B 1 -e "[3-9][0-9][0-9] insert" -e "[1-9][1-9][1-9][1-9] insert" | grep SPARK > large-patches.txt
 ```
 
+<h4>Create and upload Spark Docker Images</h4>
+
+The Spark docker images are created using the `./bin/docker-image-tool.sh` script that is included in the release artifacts.
+
+
+You should install `docker buildx` so that you can cross-compile for multiple architectures, as ARM is becoming increasingly popular. If you have access to both an ARM and an x86 machine, you should set up a [remote builder as described here](https://scalingpythonml.com/2020/12/11/some-sharp-corners-with-docker-buildx.html), but if you only have one, [docker buildx with QEMU works fine as we don't use cgo](https://docs.docker.com/buildx/working-with-buildx/).
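+
+A minimal setup sketch for the single-machine QEMU path (the builder name `spark-builder` is just an example; `tonistiigi/binfmt` is the commonly used helper image for registering QEMU emulators):
+
+```
+# Register QEMU emulators so buildx can target non-native architectures
+docker run --privileged --rm tonistiigi/binfmt --install all
+# Create and select a builder instance capable of multi-arch builds
+docker buildx create --name spark-builder --use
+docker buildx inspect --bootstrap
+```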
+
+
+Once you have your cross-platform docker build environment set up, extract the release artifact (e.g. `tar -xvf spark-3.3.0-bin-hadoop3.tgz`), go into the directory (e.g. `cd spark-3.3.0-bin-hadoop3`), and build and publish the containers to the Spark dockerhub (e.g. `./bin/docker-image-tool.sh -r docker.io/apache -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile -t v3.3.0 -X -b java_image_tag=11-jre-slim build`).
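+
+Put together, the publish step might look like the following sketch (assuming the 3.3.0 Hadoop 3 binary artifact and push access to `docker.io/apache`):
+
+```
+# Unpack the release binaries and move into the distribution
+tar -xvf spark-3.3.0-bin-hadoop3.tgz
+cd spark-3.3.0-bin-hadoop3
+# Cross-build the JVM and PySpark images and push them to the Spark dockerhub
+./bin/docker-image-tool.sh -r docker.io/apache \
+  -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile \
+  -t v3.3.0 -X -b java_image_tag=11-jre-slim build
+```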
+
 <h4>Create an announcement</h4>
 
 Once everything is working (website docs, website changes) create an announcement on the website
diff --git a/site/release-process.html b/site/release-process.html
index 236dd4631..dbecf8ff3 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -163,6 +163,8 @@
   <li>svn https://www.apache.org/dev/version-control.html#https-svn</li>
 </ul>
 
+<p>You should also get access to the ASF Dockerhub. You can request access by filing an INFRA JIRA ticket (see https://issues.apache.org/jira/browse/INFRA-21282 for an example ticket).</p>
+
 <h3>Preparing gpg key</h3>
 
 <p>You can skip this section if you have already uploaded your key.</p>
@@ -295,11 +297,12 @@ Note that not all permutations are run on PR therefore it is important to check
   <li>Package the release binaries &amp; sources, and upload them to the Apache staging SVN repo.</li>
   <li>Create the release docs, and upload them to the Apache staging SVN repo.</li>
   <li>Publish a snapshot to the Apache staging Maven repo.</li>
+  <li>Create an RC docker image tag (e.g. <code class="language-plaintext highlighter-rouge">3.4.0-rc1</code>)</li>
 </ol>
 
-<p>The process of cutting a release candidate has been automated via the <code class="language-plaintext highlighter-rouge">dev/create-release/do-release-docker.sh</code> script.
+<p>The process of cutting a release candidate has been mostly automated via the <code class="language-plaintext highlighter-rouge">dev/create-release/do-release-docker.sh</code> script.
 Run this script, type information it requires, and wait until it finishes. You can also do a single step via the <code class="language-plaintext highlighter-rouge">-s</code> option.
-Please run <code class="language-plaintext highlighter-rouge">do-release-docker.sh -h</code> and see more details.</p>
+Please run <code class="language-plaintext highlighter-rouge">do-release-docker.sh -h</code> to see more details. The script does not currently generate the RC docker image tag.</p>
 
 <h3>Call a vote on the release candidate</h3>
 
@@ -497,6 +500,14 @@ $ git shortlog v1.1.1 --grep "$EXPR" &gt; contrib.txt
 $ git log v1.1.1 --grep "$expr" --shortstat --oneline | grep -B 1 -e "[3-9][0-9][0-9] insert" -e "[1-9][1-9][1-9][1-9] insert" | grep SPARK &gt; large-patches.txt
 </code></pre></div></div>
 
+<h4>Create and upload Spark Docker Images</h4>
+
+<p>The Spark docker images are created using the <code class="language-plaintext highlighter-rouge">./bin/docker-image-tool.sh</code> script that is included in the release artifacts.</p>
+
+<p>You should install <code class="language-plaintext highlighter-rouge">docker buildx</code> so that you can cross-compile for multiple architectures, as ARM is becoming increasingly popular. If you have access to both an ARM and an x86 machine, you should set up a <a href="https://scalingpythonml.com/2020/12/11/some-sharp-corners-with-docker-buildx.html">remote builder as described here</a>, but if you only have one, <a href="https://docs.docker.com/buildx/working-with-buildx/">docker buildx with QEMU works fine as we don't use cgo</a>.</p>
+
+<p>Once you have your cross-platform docker build environment set up, extract the release artifact (e.g. <code class="language-plaintext highlighter-rouge">tar -xvf spark-3.3.0-bin-hadoop3.tgz</code>), go into the directory (e.g. <code class="language-plaintext highlighter-rouge">cd spark-3.3.0-bin-hadoop3</code>), and build and publish the containers to the Spark dockerhub (e.g. <code class="language-plaintext highlighter-rouge">./bin/docker-image-tool.sh -r docker.io/apache -p ./kubernetes/dockerfiles/spark/bindings/python/Dockerfile -t v3.3.0 -X -b java_image_tag=11-jre-slim build</code>).</p>
+
 <h4>Create an announcement</h4>
 
 <p>Once everything is working (website docs, website changes) create an announcement on the website

