Repository: spark-website
Updated Branches:
  refs/heads/asf-site a6155a89d -> 6634f88ab


Update the release process documentation based on having a new person 
run through it.
In addition to documenting some previously undocumented steps and updating 
some deprecated parts, this changes
the recommended build to be on the RM's machine to allow individual key signing 
until the Jenkins process is updated.

Update the release docs based on my initial look at Jenkins

Mention how to configure the jobs

Regenerate release process doc

Fix sentence fragment

Regenerate release process doc

The version information is taken care of by the Jenkins scripts

Re-build release process doc with change

Update the release process to describe rolling a release by hand; also switch from 
scp to sftp, since scp is now disabled on people.apache

Update the release process description further

Update with CR feedback

Update corresponding HTML


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/6634f88a
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/6634f88a
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/6634f88a

Branch: refs/heads/asf-site
Commit: 6634f88abf9a2485665956229afa420f528dd81c
Parents: a6155a8
Author: Holden Karau <hol...@us.ibm.com>
Authored: Tue Sep 12 14:49:23 2017 -0700
Committer: Holden Karau <hol...@us.ibm.com>
Committed: Tue Oct 17 22:49:06 2017 -0700

----------------------------------------------------------------------
 release-process.md        | 67 +++++++++++++++++++---------------------
 site/release-process.html | 70 ++++++++++++++++++++----------------------
 2 files changed, 66 insertions(+), 71 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark-website/blob/6634f88a/release-process.md
----------------------------------------------------------------------
diff --git a/release-process.md b/release-process.md
index a5a609e..f86ebaa 100644
--- a/release-process.md
+++ b/release-process.md
@@ -33,37 +33,6 @@ standard Git branching mechanism and should be announced to the community once t
 created. It is also good to set up Jenkins jobs for the release branch once it is cut to 
 ensure tests are passing (consult Josh Rosen and Shane Knapp for help with this).
 
-Next, ensure that all Spark versions are correct in the code base on the release branch (see 
-<a href="https://github.com/apache/spark/commit/01d233e4aede65ffa39b9d2322196d4b64186526">this example commit</a>).
-You should grep through the codebase to find all instances of the version string. Some known 
-places to change are:
-
-- **SparkContext**. Search for VERSION (only for branch 1.x)
-- **Maven build**. Ensure that the version in all the `pom.xml` files is `<SPARK-VERSION>-SNAPSHOT` 
-(e.g. `1.1.1-SNAPSHOT`). This will be changed to `<SPARK-VERSION>` (e.g. 1.1.1) automatically by 
-Maven when cutting the release. Note that there are a few exceptions that should just use 
-`<SPARK-VERSION>`. These modules are not published as artifacts.
-- **Spark REPLs**. Look for the Spark ASCII art in `SparkILoopInit.scala` for the Scala shell 
-and in `shell.py` for the Python REPL.
-- **Docs**. Search for VERSION in `docs/_config.yml`
-- **PySpark**. Search for `__version__` in `python/pyspark/version.py`
-- **SparkR**. Search for `Version` in `R/pkg/DESCRIPTION`
-
-Finally, update `CHANGES.txt` with this script in the Spark repository. `CHANGES.txt` captures 
-all the patches that have made it into this release candidate since the last release.
-
-```
-$ export SPARK_HOME=<your Spark home>
-$ cd spark
-# Update release versions
-$ vim dev/create-release/generate-changelist.py
-$ dev/create-release/generate-changelist.py
-```
-
-This produces a `CHANGES.txt.new` that should be a superset of the existing `CHANGES.txt`. 
-Replace the old `CHANGES.txt` with the new one (see 
-<a href="https://github.com/apache/spark/commit/131c62672a39a6f71f6834e9aad54b587237f13c">this example commit</a>).
-
 <h3>Cutting a Release Candidate</h3>
 
 If this is not the first RC, then make sure that the JIRA issues that have been solved since the 
@@ -75,9 +44,37 @@ For example if you are cutting RC for 1.0.2, mark such issues as `FIXED` in 1.0.
 release, and change them to the current release.
 - Verify from `git log` whether they are actually making it in the new RC or not.
 
-The process of cutting a release candidate has been automated via the AMPLab Jenkins. There are 
-Jenkins jobs that can tag a release candidate and create various packages based on that candidate. 
-The recommended process is to ask the previous release manager to walk you through the Jenkins jobs.
+The process of cutting a release candidate has been partially automated via the AMPLab Jenkins. There are
+Jenkins jobs that can tag a release candidate and create various packages based on that candidate.
+
+
+At present the Jenkins jobs *SHOULD NOT BE USED*, as they use a legacy shared key for signing.
+Instead, much of the same release logic can be accessed in `dev/create-release/release-tag.sh` and `dev/create-release/release-build.sh`. The general order of creating a release using the scripts is:
+
+- Verify that the Jenkins tests pass on your desired commit
+- Set the shell environment variables used by the scripts (run with "help" for details)
+- Verify your JAVA_HOME is set to the correct Java version (Java 8 for 2.2+, Java 7 for pre-2.2)
+- You may find Felix's Docker environment useful: https://github.com/felixcheung/spark-build/blob/master/Dockerfile
+- Ensure you have the required dependencies to build the docs (see `docs/README.md`)
+- For the CRAN packaging tests, R requires the e1071 package to be installed.
+- In addition, R uses LaTeX for some of its documentation, which requires additional fonts. On Debian-based systems you may wish to install `texlive-fonts-recommended` and `texlive-fonts-extra`.
+- Make sure you have the required Python packages for packaging (see `dev/requirements.txt`)
+- Tag the release candidate with `dev/create-release/release-tag.sh` (e.g. for creating 2.1.2 RC2 we did `ASF_USERNAME=holden ASF_PASSWORD=yoursecretgoeshere GIT_NAME="Holden Karau" GIT_BRANCH=branch-2.1 GIT_EMAIL="hol...@us.ibm.com" RELEASE_VERSION=2.1.2 RELEASE_TAG=v2.1.2-rc2 NEXT_VERSION=2.1.3-SNAPSHOT ./dev/create-release/release-tag.sh`)
+- Package the release binaries & sources with `dev/create-release/release-build.sh package`
+- Create the release docs with `dev/create-release/release-build.sh docs`
+- For Spark versions prior to 2.1.2, change SPARK_VERSION from X.Y.Z to X.Y.Z-rcA, then run `dev/create-release/release-build.sh publish-release`.
+- Publish a snapshot to the Apache release repo with `dev/create-release/release-build.sh publish-release`
+
+
+If the Jenkins jobs have been updated to support signing with your key, the jobs required for a release are located in the [Spark Release Jobs](https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/) collection.
+If you don't have access, talk to a previous release manager for guidance and to get access.
+The jobs can be launched with "Build with Parameters" and the general order is:
+
+- Create a tag for the current RC with the [spark-release-tag](https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-tag/) job.
+- After the current RC has been configured, kick off the rest of the jobs except spark-release-publish.
+- Once the packaging and doc jobs have finished, kick off the [spark-release-publish](https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-publish) job.
+
+The jobs are configured through build parameters. If the build parameters are unclear, you can look at previous releases or, if they are available, ask the previous release manager to walk you through the Jenkins jobs, as this document may not be 100% up to date.
 
 <h3>Call a Vote on the Release Candidate</h3>
 
@@ -113,7 +110,7 @@ $ svn co https://dist.apache.org/repos/dist/dev/spark/
 mkdir spark-1.1.1-rc2
  
 # Download the voted binaries and add them to the directory
-$ scp andrewo...@people.apache.org:~/public_html/spark-1.1.1-rc2/* spark-1.1.1-rc2
+$ sftp -r andrewo...@people.apache.org:~/public_html/spark-1.1.1-rc2/* spark-1.1.1-rc2
  
 # NOTE: Remove any binaries you don’t want to publish
 # E.g. never push MapR and *without-hive artifacts to apache
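
[Editor's note: for reference, a minimal end-to-end sketch of the by-hand flow the new text above describes, using the 2.1.2 RC2 values from the tagging example in the diff. That `release-build.sh` consumes the same environment variables as `release-tag.sh` is an assumption here; run each script with "help" for the authoritative list.]

```
# Sketch of the by-hand release flow (2.1.2 RC2 example from the diff).
# Assumption: release-build.sh reads the same environment as release-tag.sh.
export ASF_USERNAME=holden
export ASF_PASSWORD=yoursecretgoeshere        # placeholder from the diff; never use a real secret here
export GIT_NAME="Holden Karau"
export GIT_EMAIL="hol...@us.ibm.com"          # elided in the original, kept as-is
export GIT_BRANCH=branch-2.1
export RELEASE_VERSION=2.1.2
export RELEASE_TAG=v2.1.2-rc2
export NEXT_VERSION=2.1.3-SNAPSHOT

./dev/create-release/release-tag.sh                    # tag v2.1.2-rc2
./dev/create-release/release-build.sh package          # release binaries & sources
./dev/create-release/release-build.sh docs             # release docs
./dev/create-release/release-build.sh publish-release  # publish to the Apache release repo
```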

http://git-wip-us.apache.org/repos/asf/spark-website/blob/6634f88a/site/release-process.html
----------------------------------------------------------------------
diff --git a/site/release-process.html b/site/release-process.html
index e74e0f6..6261650 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -231,38 +231,6 @@ standard Git branching mechanism and should be announced to the community once t
 created. It is also good to set up Jenkins jobs for the release branch once it is cut to 
 ensure tests are passing (consult Josh Rosen and Shane Knapp for help with this).</p>
 
-<p>Next, ensure that all Spark versions are correct in the code base on the release branch (see 
-<a href="https://github.com/apache/spark/commit/01d233e4aede65ffa39b9d2322196d4b64186526">this example commit</a>).
-You should grep through the codebase to find all instances of the version string. Some known 
-places to change are:</p>
-
-<ul>
-  <li><strong>SparkContext</strong>. Search for VERSION (only for branch 1.x)</li>
-  <li><strong>Maven build</strong>. Ensure that the version in all the <code>pom.xml</code> files is <code>&lt;SPARK-VERSION&gt;-SNAPSHOT</code> 
-(e.g. <code>1.1.1-SNAPSHOT</code>). This will be changed to <code>&lt;SPARK-VERSION&gt;</code> (e.g. 1.1.1) automatically by 
-Maven when cutting the release. Note that there are a few exceptions that should just use 
-<code>&lt;SPARK-VERSION&gt;</code>. These modules are not published as artifacts.</li>
-  <li><strong>Spark REPLs</strong>. Look for the Spark ASCII art in <code>SparkILoopInit.scala</code> for the Scala shell 
-and in <code>shell.py</code> for the Python REPL.</li>
-  <li><strong>Docs</strong>. Search for VERSION in <code>docs/_config.yml</code></li>
-  <li><strong>PySpark</strong>. Search for <code>__version__</code> in <code>python/pyspark/version.py</code></li>
-  <li><strong>SparkR</strong>. Search for <code>Version</code> in <code>R/pkg/DESCRIPTION</code></li>
-</ul>
-
-<p>Finally, update <code>CHANGES.txt</code> with this script in the Spark repository. <code>CHANGES.txt</code> captures 
-all the patches that have made it into this release candidate since the last release.</p>
-
-<pre><code>$ export SPARK_HOME=&lt;your Spark home&gt;
-$ cd spark
-# Update release versions
-$ vim dev/create-release/generate-changelist.py
-$ dev/create-release/generate-changelist.py
-</code></pre>
-
-<p>This produces a <code>CHANGES.txt.new</code> that should be a superset of the existing <code>CHANGES.txt</code>. 
-Replace the old <code>CHANGES.txt</code> with the new one (see 
-<a href="https://github.com/apache/spark/commit/131c62672a39a6f71f6834e9aad54b587237f13c">this example commit</a>).</p>
-
 <h3>Cutting a Release Candidate</h3>
 
 <p>If this is not the first RC, then make sure that the JIRA issues that have been solved since the 
@@ -276,9 +244,39 @@ release, and change them to the current release.</li>
   <li>Verify from <code>git log</code> whether they are actually making it in the new RC or not.</li>
 </ul>
 
-<p>The process of cutting a release candidate has been automated via the AMPLab Jenkins. There are 
-Jenkins jobs that can tag a release candidate and create various packages based on that candidate. 
-The recommended process is to ask the previous release manager to walk you through the Jenkins jobs.</p>
+<p>The process of cutting a release candidate has been partially automated via the AMPLab Jenkins. There are
+Jenkins jobs that can tag a release candidate and create various packages based on that candidate.</p>
+
+<p>At present the Jenkins jobs <em>SHOULD NOT BE USED</em>, as they use a legacy shared key for signing.
+Instead, much of the same release logic can be accessed in <code>dev/create-release/release-tag.sh</code> and <code>dev/create-release/release-build.sh</code>. The general order of creating a release using the scripts is:</p>
+
+<ul>
+  <li>Verify that the Jenkins tests pass on your desired commit</li>
+  <li>Set the shell environment variables used by the scripts (run with &#8220;help&#8221; for details)</li>
+  <li>Verify your JAVA_HOME is set to the correct Java version (Java 8 for 2.2+, Java 7 for pre-2.2)</li>
+  <li>You may find Felix&#8217;s Docker environment useful: https://github.com/felixcheung/spark-build/blob/master/Dockerfile</li>
+  <li>Ensure you have the required dependencies to build the docs (see <code>docs/README.md</code>)</li>
+  <li>For the CRAN packaging tests, R requires the e1071 package to be installed.</li>
+  <li>In addition, R uses LaTeX for some of its documentation, which requires additional fonts. On Debian-based systems you may wish to install <code>texlive-fonts-recommended</code> and <code>texlive-fonts-extra</code>.</li>
+  <li>Make sure you have the required Python packages for packaging (see <code>dev/requirements.txt</code>)</li>
+  <li>Tag the release candidate with <code>dev/create-release/release-tag.sh</code> (e.g. for creating 2.1.2 RC2 we did <code>ASF_USERNAME=holden ASF_PASSWORD=yoursecretgoeshere GIT_NAME="Holden Karau" GIT_BRANCH=branch-2.1 GIT_EMAIL="hol...@us.ibm.com" RELEASE_VERSION=2.1.2 RELEASE_TAG=v2.1.2-rc2 NEXT_VERSION=2.1.3-SNAPSHOT ./dev/create-release/release-tag.sh</code>)</li>
+  <li>Package the release binaries &amp; sources with <code>dev/create-release/release-build.sh package</code></li>
+  <li>Create the release docs with <code>dev/create-release/release-build.sh docs</code></li>
+  <li>For Spark versions prior to 2.1.2, change SPARK_VERSION from X.Y.Z to X.Y.Z-rcA, then run <code>dev/create-release/release-build.sh publish-release</code>.</li>
+  <li>Publish a snapshot to the Apache release repo with <code>dev/create-release/release-build.sh publish-release</code></li>
+</ul>
+
+<p>If the Jenkins jobs have been updated to support signing with your key, the jobs required for a release are located in the <a href="https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/">Spark Release Jobs</a> collection.
+If you don&#8217;t have access, talk to a previous release manager for guidance and to get access.
+The jobs can be launched with &#8220;Build with Parameters&#8221; and the general order is:</p>
+
+<ul>
+  <li>Create a tag for the current RC with the <a href="https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-tag/">spark-release-tag</a> job.</li>
+  <li>After the current RC has been configured, kick off the rest of the jobs except spark-release-publish.</li>
+  <li>Once the packaging and doc jobs have finished, kick off the <a href="https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-publish">spark-release-publish</a> job.</li>
+</ul>
+
+<p>The jobs are configured through build parameters. If the build parameters are unclear, you can look at previous releases or, if they are available, ask the previous release manager to walk you through the Jenkins jobs, as this document may not be 100% up to date.</p>
 
 <h3>Call a Vote on the Release Candidate</h3>
 
@@ -315,7 +313,7 @@ $ svn co https://dist.apache.org/repos/dist/dev/spark/
 mkdir spark-1.1.1-rc2
  
 # Download the voted binaries and add them to the directory
-$ scp andrewo...@people.apache.org:~/public_html/spark-1.1.1-rc2/* spark-1.1.1-rc2
+$ sftp -r andrewo...@people.apache.org:~/public_html/spark-1.1.1-rc2/* spark-1.1.1-rc2
  
 # NOTE: Remove any binaries you don’t want to publish
 # E.g. never push MapR and *without-hive artifacts to apache
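
[Editor's note: as a usage note on the scp-to-sftp switch above, here is a hypothetical staging sequence assembled from the hunk's context lines. The `cd` into the checkout and the final `svn add`/`svn commit` are not shown in the diff and are assumptions about the standard svn workflow.]

```
# Check out the dev dist area and stage the RC (paths from the diff context).
$ svn co https://dist.apache.org/repos/dist/dev/spark/
$ cd spark                 # assumption: directory created by the checkout
$ mkdir spark-1.1.1-rc2

# Download the voted binaries; scp is disabled on people.apache, so use sftp.
$ sftp -r andrewo...@people.apache.org:~/public_html/spark-1.1.1-rc2/* spark-1.1.1-rc2

# NOTE: remove any binaries you don't want to publish
# (e.g. never push MapR and *without-hive artifacts to apache).

# Assumption: the staged files are committed with the usual svn workflow.
$ svn add spark-1.1.1-rc2
$ svn commit -m "Add spark-1.1.1-rc2"
```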

