[ 
https://issues.apache.org/jira/browse/BEAM-4498?focusedWorklogId=151332&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-151332
 ]

ASF GitHub Bot logged work on BEAM-4498:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 04/Oct/18 19:08
            Start Date: 04/Oct/18 19:08
    Worklog Time Spent: 10m 
      Work Description: swegner closed pull request #6556: [BEAM-4498] Add 
redirects and point javadoc/pydoc links to new location
URL: https://github.com/apache/beam/pull/6556

This is a PR merged from a forked repository. As GitHub hides the
original diff on merge, it is reproduced below for the sake of provenance:

diff --git a/website/Rakefile b/website/Rakefile
index e814956451d..fc6fc0713c4 100644
--- a/website/Rakefile
+++ b/website/Rakefile
@@ -9,17 +9,14 @@ task :test do
       :connecttimeout => 40 },
     :allow_hash_href => true,
     :check_html => true,
-    :file_ignore => [/javadoc/, /v2/, /pydoc/],
+    :file_ignore => [/v2/],
     :url_ignore => [
-        # Javadocs and Pydocs are only available on asf-site branch
-        /documentation\/sdks\/javadoc/,
-        /documentation\/sdks\/pydoc/,
-
         /jstorm.io/,
         /datatorrent.com/,
         /ai.google/, # https://issues.apache.org/jira/browse/INFRA-16527
         /globenewswire.com/, # https://issues.apache.org/jira/browse/BEAM-5518
-        /www.se-radio.net/ # BEAM-5611: Can fail with rate limit HTTP 508 error
+        /www.se-radio.net/, # BEAM-5611: Can fail with rate limit HTTP 508 error
+        /beam.apache.org\/releases/ # BEAM-4499 remove once publishing is migrated
     ],
     :parallel => { :in_processes => Etc.nprocessors },
     }).run
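
For context, html-proofer's `:url_ignore` option (used above) takes a list of
regular expressions, and any link matching one of them is skipped by the
checker. A minimal Python sketch of that matching semantics — the pattern list
below is a hypothetical subset, not the full Rakefile configuration:

```python
import re

# Hypothetical subset of the ignore list above; links matching any
# pattern are skipped by the link checker.
URL_IGNORE = [
    re.compile(r"jstorm\.io"),
    re.compile(r"beam\.apache\.org/releases"),  # BEAM-4499: until publishing migrates
]

def should_check(url):
    """Return True only for links the checker should actually verify."""
    return not any(pattern.search(url) for pattern in URL_IGNORE)

assert should_check("https://beam.apache.org/documentation/") is True
assert should_check("https://beam.apache.org/releases/javadoc/2.6.0/") is False
```
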
diff --git a/website/src/.htaccess b/website/src/.htaccess
index 06fc74b0c65..77dabf482be 100644
--- a/website/src/.htaccess
+++ b/website/src/.htaccess
@@ -13,3 +13,12 @@ RewriteCond %{HTTPS} !on
 # * Redirect (R) permanently (301) to https://beam.apache.org/,
 # * Stop processing more rules (L).
 RewriteRule ^(.*)$ https://beam.apache.org/$1 [L,R=301]
+
+# Javadocs / pydocs are available only on the published website, published from
+# https://github.com/apache/beam-site/tree/release-docs
+# They were previously hosted within this repository, and published at the URL
+# path /documentation/sdks/(javadoc|pydoc)/..
+# The following redirect maintains the previously supported URLs.
+RedirectMatch permanent "/documentation/sdks/(javadoc|pydoc)(.*)" "https://beam.apache.org/documentation/releases/$1$2"
+# Keep this updated to point to the current release.
+RedirectMatch "/releases/([^/]+)/current(.*)" "https://beam.apache.org/documentation/releases/$1/2.6.0$2"
diff --git a/website/src/_includes/section-menu/sdks.html 
b/website/src/_includes/section-menu/sdks.html
index 0102b4bd9b7..61e5f0cf84e 100644
--- a/website/src/_includes/section-menu/sdks.html
+++ b/website/src/_includes/section-menu/sdks.html
@@ -16,7 +16,7 @@
   <span class="section-nav-list-title">Java</span>
   <ul class="section-nav-list">
     <li><a href="{{ site.baseurl }}/documentation/sdks/java/">Java SDK 
overview</a></li>
-    <li><a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest }}/" target="_blank">Java SDK API reference <img src="{{ 
site.baseurl }}/images/external-link-icon.png"
+    <li><a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/" target="_blank">Java SDK API reference <img src="{{ 
site.baseurl }}/images/external-link-icon.png"
                                                                                
                                                    width="14" height="14"
                                                                                
                                                    alt="External link."></a>
     </li>
@@ -30,7 +30,7 @@
   <span class="section-nav-list-title">Python</span>
   <ul class="section-nav-list">
     <li><a href="{{ site.baseurl }}/documentation/sdks/python/">Python SDK 
overview</a></li>
-    <li><a href="{{ site.baseurl }}/documentation/sdks/pydoc/{{ 
site.release_latest }}/" target="_blank">Python SDK API reference <img src="{{ 
site.baseurl }}/images/external-link-icon.png"
+    <li><a href="https://beam.apache.org/releases/pydoc/{{ site.release_latest 
}}/" target="_blank">Python SDK API reference <img src="{{ site.baseurl 
}}/images/external-link-icon.png"
                                                                                
                                                    width="14" height="14"
                                                                                
                                                    alt="External link."></a>
     </li>
diff --git a/website/src/_posts/2016-10-20-test-stream.md 
b/website/src/_posts/2016-10-20-test-stream.md
index 876b4d7d8dc..be940e98ab1 100644
--- a/website/src/_posts/2016-10-20-test-stream.md
+++ b/website/src/_posts/2016-10-20-test-stream.md
@@ -73,7 +73,7 @@ be controlled within a test.
 ## Writing Deterministic Tests to Emulate Nondeterminism
 
 The Beam testing infrastructure provides the
-[PAssert]({{ site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/testing/PAssert.html)
+[PAssert](https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/testing/PAssert.html)
 methods, which assert properties about the contents of a PCollection from 
within
 a pipeline. We have expanded this infrastructure to include
 
[TestStream](https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/testing/TestStream.java),
diff --git a/website/src/_posts/2017-03-16-python-sdk-release.md 
b/website/src/_posts/2017-03-16-python-sdk-release.md
index c56449aff6f..443a00fe3b6 100644
--- a/website/src/_posts/2017-03-16-python-sdk-release.md
+++ b/website/src/_posts/2017-03-16-python-sdk-release.md
@@ -31,7 +31,7 @@ There are two runners capable of executing pipelines written 
with the Python SDK
 
 #### Try the Apache Beam Python SDK
 
-If you would like to try out the Python SDK, a good place to start is the 
[Quickstart]({{ site.baseurl }}/get-started/quickstart-py/). After that, you 
can take a look at additional 
[examples](https://github.com/apache/beam/tree/v0.6.0/sdks/python/apache_beam/examples),
 and deep dive into the [API reference]({{ site.baseurl 
}}/documentation/sdks/pydoc/).
+If you would like to try out the Python SDK, a good place to start is the 
[Quickstart]({{ site.baseurl }}/get-started/quickstart-py/). After that, you 
can take a look at additional 
[examples](https://github.com/apache/beam/tree/v0.6.0/sdks/python/apache_beam/examples),
 and deep dive into the [API 
reference](https://beam.apache.org/releases/pydoc/).
 
 Let’s take a look at a quick example together. First, install the 
`apache-beam` package from PyPI and start your Python interpreter.
 
diff --git a/website/src/_posts/2017-08-04-splittable-do-fn.md 
b/website/src/_posts/2017-08-04-splittable-do-fn.md
index 64a8363176c..39228256af5 100644
--- a/website/src/_posts/2017-08-04-splittable-do-fn.md
+++ b/website/src/_posts/2017-08-04-splittable-do-fn.md
@@ -85,24 +85,24 @@ has other limitations that make it insufficient for this 
task*).
 ## Beam Source API
 
 Apache Beam historically provides a Source API
-([BoundedSource]({{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/BoundedSource.html)
+([BoundedSource](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/BoundedSource.html)
 and
-[UnboundedSource]({{ site.baseurl }}/documentation/sdks/javadoc/{{
+[UnboundedSource](https://beam.apache.org/releases/javadoc/{{
 site.release_latest }}/org/apache/beam/sdk/io/UnboundedSource.html)) which does
 not have these limitations and allows development of efficient data sources for
 batch and streaming systems. Pipelines use this API via the
-[`Read.from(Source)`]({{ site.baseurl }}/documentation/sdks/javadoc/{{
+[`Read.from(Source)`](https://beam.apache.org/releases/javadoc/{{
 site.release_latest }}/org/apache/beam/sdk/io/Read.html) built-in `PTransform`.
 
 The Source API is largely similar to that of most other data processing
 frameworks, and allows the system to read data in parallel using multiple
 workers, as well as checkpoint and resume reading from an unbounded data 
source.
 Additionally, the Beam
-[`BoundedSource`]({{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/BoundedSource.html)
+[`BoundedSource`](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/BoundedSource.html)
 API provides advanced features such as progress reporting and [dynamic
 rebalancing]({{ site.baseurl }}/blog/2016/05/18/splitAtFraction-method.html)
 (which together enable autoscaling), and
-[`UnboundedSource`]({{ site.baseurl }}/documentation/sdks/javadoc/{{
+[`UnboundedSource`](https://beam.apache.org/releases/javadoc/{{
 site.release_latest }}/org/apache/beam/sdk/io/UnboundedSource.html) supports
 reporting the source's watermark and backlog *(until SDF, we believed that
 "batch" and "streaming" data sources are fundamentally different and thus
diff --git a/website/src/_posts/2018-08-20-review-input-streaming-connectors.md 
b/website/src/_posts/2018-08-20-review-input-streaming-connectors.md
index d3a9c9aebc3..09da93d92f1 100644
--- a/website/src/_posts/2018-08-20-review-input-streaming-connectors.md
+++ b/website/src/_posts/2018-08-20-review-input-streaming-connectors.md
@@ -54,7 +54,7 @@ Below are the main streaming input connectors for available 
for Beam and Spark D
    </td>
    <td>Local<br>(Using the <code>file://</code> URI)
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/TextIO.html">TextIO</a>
+   <td><a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/TextIO.html">TextIO</a>
    </td>
    <td><a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/streaming/StreamingContext.html#textFileStream-java.lang.String-";>textFileStream</a><br>(Spark
 treats most Unix systems as HDFS-compatible, but the location should be 
accessible from all nodes)
    </td>
@@ -62,7 +62,7 @@ Below are the main streaming input connectors for available 
for Beam and Spark D
   <tr>
    <td>HDFS<br>(Using the <code>hdfs://</code> URI)
    </td>
-    <td><a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/FileIO.html">FileIO</a> + <a 
href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/io/hdfs/HadoopFileSystemOptions.html">HadoopFileSystemOptions</a>
+    <td><a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/FileIO.html">FileIO</a> + <a 
href="https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/io/hdfs/HadoopFileSystemOptions.html">HadoopFileSystemOptions</a>
    </td>
    <td><a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/streaming/util/HdfsUtils.html";>HdfsUtils</a>
    </td>
@@ -72,7 +72,7 @@ Below are the main streaming input connectors for available 
for Beam and Spark D
    </td>
    <td>Cloud Storage<br>(Using the <code>gs://</code> URI)
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/FileIO.html">FileIO</a> + <a 
href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/extensions/gcp/options/GcsOptions.html">GcsOptions</a>
+   <td><a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/FileIO.html">FileIO</a> + <a 
href="https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/extensions/gcp/options/GcsOptions.html">GcsOptions</a>
    </td>
    <td rowspan="2" ><a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/SparkContext.html#hadoopConfiguration--";>hadoopConfiguration</a>
 and <a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/streaming/StreamingContext.html#textFileStream-java.lang.String-";>textFileStream</a>
@@ -81,7 +81,7 @@ and <a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/stre
   <tr>
    <td>S3<br>(Using the <code>s3://</code> URI)
    </td>
-    <td><a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/FileIO.html">FileIO</a> + <a 
href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/io/aws/options/S3Options.html">S3Options</a>
+    <td><a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/FileIO.html">FileIO</a> + <a 
href="https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/io/aws/options/S3Options.html">S3Options</a>
    </td>
   </tr>
   <tr>
@@ -89,7 +89,7 @@ and <a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/stre
    </td>
    <td>Kafka
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/kafka/KafkaIO.html">KafkaIO</a>
+   <td><a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/io/kafka/KafkaIO.html">KafkaIO</a>
    </td>
    <td><a 
href="https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html";>spark-streaming-kafka</a>
    </td>
@@ -97,7 +97,7 @@ and <a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/stre
   <tr>
    <td>Kinesis
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest 
}}/org/apache/beam/sdk/io/kinesis/KinesisIO.html">KinesisIO</a>
+   <td><a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/org/apache/beam/sdk/io/kinesis/KinesisIO.html">KinesisIO</a>
    </td>
    <td><a 
href="https://spark.apache.org/docs/latest/streaming-kinesis-integration.html";>spark-streaming-kinesis</a>
    </td>
@@ -105,7 +105,7 @@ and <a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/stre
   <tr>
    <td>Cloud Pub/Sub
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest 
}}/org/apache/beam/sdk/io/gcp/pubsub/PubsubIO.html">PubsubIO</a>
+   <td><a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/org/apache/beam/sdk/io/gcp/pubsub/PubsubIO.html">PubsubIO</a>
    </td>
    <td><a 
href="https://github.com/apache/bahir/tree/master/streaming-pubsub";>spark-streaming-pubsub</a>
 from <a href="http://bahir.apache.org";>Apache Bahir</a>
    </td>
@@ -146,7 +146,7 @@ Below are the main streaming input connectors for available 
for Beam and Spark D
    </td>
    <td>Local
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/pydoc/{{ 
site.release_latest }}/apache_beam.io.textio.html">io.textio</a>
+   <td><a href="https://beam.apache.org/releases/pydoc/{{ site.release_latest 
}}/apache_beam.io.textio.html">io.textio</a>
    </td>
    <td><a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream";>textFileStream</a>
    </td>
@@ -154,7 +154,7 @@ Below are the main streaming input connectors for available 
for Beam and Spark D
   <tr>
    <td>HDFS
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/pydoc/{{ 
site.release_latest 
}}/apache_beam.io.hadoopfilesystem.html">io.hadoopfilesystem</a>
+   <td><a href="https://beam.apache.org/releases/pydoc/{{ site.release_latest 
}}/apache_beam.io.hadoopfilesystem.html">io.hadoopfilesystem</a>
    </td>
    <td><a 
href="https://spark.apache.org/docs/latest/api/java/org/apache/spark/SparkContext.html#hadoopConfiguration--";>hadoopConfiguration</a>
 (Access through <code>sc._jsc</code> with Py4J)
 and <a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream";>textFileStream</a>
@@ -165,7 +165,7 @@ and <a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.ht
    </td>
    <td>Google Cloud Storage
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/pydoc/{{ 
site.release_latest }}/apache_beam.io.gcp.gcsio.html">io.gcp.gcsio</a>
+   <td><a href="https://beam.apache.org/releases/pydoc/{{ site.release_latest 
}}/apache_beam.io.gcp.gcsio.html">io.gcp.gcsio</a>
    </td>
    <td rowspan="2" ><a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.html#pyspark.streaming.StreamingContext.textFileStream";>textFileStream</a>
    </td>
@@ -197,7 +197,7 @@ and <a 
href="http://spark.apache.org/docs/latest/api/python/pyspark.streaming.ht
   <tr>
    <td>Cloud Pub/Sub
    </td>
-   <td><a href="{{ site.baseurl }}/documentation/sdks/pydoc/{{ 
site.release_latest }}/apache_beam.io.gcp.pubsub.html">io.gcp.pubsub</a>
+   <td><a href="https://beam.apache.org/releases/pydoc/{{ site.release_latest 
}}/apache_beam.io.gcp.pubsub.html">io.gcp.pubsub</a>
    </td>
    <td>N/A
    </td>
diff --git a/website/src/contribute/ptransform-style-guide.md 
b/website/src/contribute/ptransform-style-guide.md
index da04baff633..4cdcb7b9833 100644
--- a/website/src/contribute/ptransform-style-guide.md
+++ b/website/src/contribute/ptransform-style-guide.md
@@ -202,8 +202,8 @@ Do not:
 Do:
 
 * Generally, follow the rules of [semantic versioning](http://semver.org/).
-* If the API of the transform is not yet stable, annotate it as 
`@Experimental` (Java) or `@experimental` ([Python]({{ site.baseurl 
}}/documentation/sdks/pydoc/{{ site.release_latest 
}}/apache_beam.utils.annotations.html)).
-* If the API deprecated, annotate it as `@Deprecated` (Java) or `@deprecated` 
([Python]({{ site.baseurl }}/documentation/sdks/pydoc/{{ site.release_latest 
}}/apache_beam.utils.annotations.html)).
+* If the API of the transform is not yet stable, annotate it as 
`@Experimental` (Java) or `@experimental` 
([Python](https://beam.apache.org/releases/pydoc/{{ site.release_latest 
}}/apache_beam.utils.annotations.html)).
+* If the API deprecated, annotate it as `@Deprecated` (Java) or `@deprecated` 
([Python](https://beam.apache.org/releases/pydoc/{{ site.release_latest 
}}/apache_beam.utils.annotations.html)).
 * Pay attention to the stability and versioning of third-party classes exposed 
by the transform's API: if they are unstable or improperly versioned (do not 
obey [semantic versioning](http://semver.org/)), it is better to wrap them in 
your own classes.
 
 Do not:
diff --git a/website/src/contribute/release-guide.md 
b/website/src/contribute/release-guide.md
index cddad391811..e4dd601635f 100644
--- a/website/src/contribute/release-guide.md
+++ b/website/src/contribute/release-guide.md
@@ -567,7 +567,7 @@ One of the artifacts created in the release contains the 
Javadoc for the
 website. To update the website, you must unpack this jar file from the release
 candidate into the source tree of the website.
 
-Add the new Javadoc to [SDK API Reference page]({{ site.baseurl 
}}/documentation/sdks/javadoc/) page, as follows:
+Add the new Javadoc to [SDK API Reference 
page](https://beam.apache.org/releases/javadoc/) page, as follows:
 
 * Unpack the Maven artifact `org.apache.beam:beam-sdks-java-javadoc` into some 
temporary location. Call this `${JAVADOC_TMP}`.
 * Copy the generated Javadoc into the website repository: `cp -r 
${JAVADOC_TMP} src/documentation/sdks/javadoc/${RELEASE}`.
@@ -575,7 +575,7 @@ Add the new Javadoc to [SDK API Reference page]({{ 
site.baseurl }}/documentation
 * Update the Javadoc link on this page to point to the new version (in 
`src/documentation/sdks/javadoc/current.md`).
 
 ##### Create Pydoc
-Add the new Pydoc to [SDK API Reference page]({{ site.baseurl 
}}/documentation/sdks/pydoc/) page, as follows:
+Add the new Pydoc to [SDK API Reference 
page](https://beam.apache.org/releases/pydoc/) page, as follows:
 
 * Copy the generated Pydoc into the website repository: `cp -r ${PYDOC_ROOT} 
src/documentation/sdks/pydoc/${RELEASE}`.
 * Remove `.doctrees` directory.
@@ -595,7 +595,7 @@ Please follow the [user 
guide](https://github.com/apache/beam-wheels#user-guide)
 
 1. Maven artifacts deployed to the staging repository of 
[repository.apache.org](https://repository.apache.org/content/repositories/)
 1. Source distribution deployed to the dev repository of 
[dist.apache.org](https://dist.apache.org/repos/dist/dev/beam/)
-1. Website pull request proposed to list the [release]({{ site.baseurl 
}}/get-started/downloads/), publish the [Java API reference manual]({{ 
site.baseurl }}/documentation/sdks/javadoc/), and publish the [Python API 
reference manual]({{ site.baseurl }}/documentation/sdks/pydoc/).
+1. Website pull request proposed to list the [release]({{ site.baseurl 
}}/get-started/downloads/), publish the [Java API reference 
manual](https://beam.apache.org/releases/javadoc/), and publish the [Python API 
reference manual](https://beam.apache.org/releases/pydoc/).
 
 You can (optionally) also do additional verification by:
 1. Check that Python zip file contains the `README.md`, `NOTICE`, and 
`LICENSE` files.
@@ -958,7 +958,7 @@ Create and push a new signed tag for the released version 
by copying the tag for
 
 ### Merge website pull request
 
-Merge the website pull request to [list the release]({{ site.baseurl 
}}/get-started/downloads/), publish the [Python API reference manual]({{ 
site.baseurl }}/documentation/sdks/pydoc/), and the [Java API reference 
manual]({{ site.baseurl }}/documentation/sdks/javadoc/) created earlier.
+Merge the website pull request to [list the release]({{ site.baseurl 
}}/get-started/downloads/), publish the [Python API reference 
manual](https://beam.apache.org/releases/pydoc/), and the [Java API reference 
manual](https://beam.apache.org/releases/javadoc/) created earlier.
 
 ### Mark the version as released in JIRA
 
@@ -973,7 +973,7 @@ Use reporter.apache.org to seed the information about the 
release into future pr
 * Maven artifacts released and indexed in the [Maven Central 
Repository](https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.beam%22)
 * Source distribution available in the release repository of 
[dist.apache.org](https://dist.apache.org/repos/dist/release/beam/)
 * Source distribution removed from the dev repository of 
[dist.apache.org](https://dist.apache.org/repos/dist/dev/beam/)
-* Website pull request to [list the release]({{ site.baseurl 
}}/get-started/downloads/) and publish the [API reference manual]({{ 
site.baseurl }}/documentation/sdks/javadoc/) merged
+* Website pull request to [list the release]({{ site.baseurl 
}}/get-started/downloads/) and publish the [API reference 
manual](https://beam.apache.org/releases/javadoc/) merged
 * Release tagged in the source code repository
 * Release version finalized in JIRA. (Note: Not all committers have 
administrator access to JIRA. If you end up getting permissions errors ask on 
the mailing list for assistance.)
 * Release version is listed at reporter.apache.org
diff --git a/website/src/contribute/runner-guide.md 
b/website/src/contribute/runner-guide.md
index 212ff0445a6..a0aa2a81491 100644
--- a/website/src/contribute/runner-guide.md
+++ b/website/src/contribute/runner-guide.md
@@ -340,7 +340,7 @@ via the [Fn API](#the-fn-api) may manifest as another 
implementation of
 
 **Python**
 
-See the [DoFnRunner 
pydoc](https://beam.apache.org/documentation/sdks/pydoc/2.0.0/apache_beam.runners.html#apache_beam.runners.common.DoFnRunner).
+See the [DoFnRunner 
pydoc](https://beam.apache.org/releases/pydoc/2.0.0/apache_beam.runners.html#apache_beam.runners.common.DoFnRunner).
 
 #### Side Inputs
 
@@ -387,7 +387,7 @@ is used to implement this.
 
 **Python**
 
-In Python, 
[`SideInputMap`](https://beam.apache.org/documentation/sdks/pydoc/2.0.0/apache_beam.transforms.html#apache_beam.transforms.sideinputs.SideInputMap)
 maps
+In Python, 
[`SideInputMap`](https://beam.apache.org/releases/pydoc/2.0.0/apache_beam.transforms.html#apache_beam.transforms.sideinputs.SideInputMap)
 maps
 windows to side input values. The `WindowMappingFn` manifests as a simple
 function. See
 
[sideinputs.py](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/transforms/sideinputs.py).
@@ -443,9 +443,9 @@ have some special knowledge of the types involved.
 The elements you are processing will be key-value pairs, and you'll need to 
extract
 the keys. For this reason, the format of key-value pairs is standardized and
 shared across all SDKS. See either
-[`KvCoder`](https://beam.apache.org/documentation/sdks/javadoc/2.0.0/org/apache/beam/sdk/coders/KvCoder.html)
+[`KvCoder`](https://beam.apache.org/releases/javadoc/2.0.0/org/apache/beam/sdk/coders/KvCoder.html)
 in Java or
-[`TupleCoder`](https://beam.apache.org/documentation/sdks/pydoc/2.0.0/apache_beam.coders.html#apache_beam.coders.coders.TupleCoder.key_coder)
+[`TupleCoder`](https://beam.apache.org/releases/pydoc/2.0.0/apache_beam.coders.html#apache_beam.coders.coders.TupleCoder.key_coder)
 in Python for documentation on the binary format.
 
 #### Window Merging
@@ -610,9 +610,9 @@ it into primitives for your engine. The general pattern is 
to write a visitor
 that builds a job specification as it walks the graph of `PTransforms`.
 
 The entry point for this in Java is
-[`Pipeline.traverseTopologically`](https://beam.apache.org/documentation/sdks/javadoc/2.0.0/org/apache/beam/sdk/Pipeline.html#traverseTopologically-org.apache.beam.sdk.Pipeline.PipelineVisitor-)
+[`Pipeline.traverseTopologically`](https://beam.apache.org/releases/javadoc/2.0.0/org/apache/beam/sdk/Pipeline.html#traverseTopologically-org.apache.beam.sdk.Pipeline.PipelineVisitor-)
 and
-[`Pipeline.visit`](https://beam.apache.org/documentation/sdks/pydoc/2.0.0/apache_beam.html#apache_beam.pipeline.Pipeline.visit)
+[`Pipeline.visit`](https://beam.apache.org/releases/pydoc/2.0.0/apache_beam.html#apache_beam.pipeline.Pipeline.visit)
 in Python. See the generated documentation for details.
 
 ### Altering a pipeline
@@ -634,7 +634,7 @@ The Java SDK and the "runners core construction" library 
(the artifact is
 of work. In Python, support code is still under development.
 
 All pipeline alteration is done via
-[`Pipeline.replaceAll(PTransformOverride)`](https://beam.apache.org/documentation/sdks/javadoc/2.0.0/org/apache/beam/sdk/Pipeline.html#replaceAll-java.util.List-)
+[`Pipeline.replaceAll(PTransformOverride)`](https://beam.apache.org/releases/javadoc/2.0.0/org/apache/beam/sdk/Pipeline.html#replaceAll-java.util.List-)
 method. A
 
[`PTransformOverride`](https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/runners/PTransformOverride.java)
 is a pair of a
@@ -682,7 +682,7 @@ want the users of that SDK (such as Python) to use it.
 #### Allowing users to pass options to your runner
 
 The mechanism for configuration is
-[`PipelineOptions`](https://beam.apache.org/documentation/sdks/javadoc/2.0.0/org/apache/beam/sdk/options/PipelineOptions.html),
+[`PipelineOptions`](https://beam.apache.org/releases/javadoc/2.0.0/org/apache/beam/sdk/options/PipelineOptions.html),
 an interface that works completely differently than normal Java objects. Forget
 what you know, and follow the rules, and `PipelineOptions` will treat you well.
 
diff --git a/website/src/documentation/dsls/sql/create-table.md 
b/website/src/documentation/dsls/sql/create-table.md
index cfa1d2d1ecb..e481fe81766 100644
--- a/website/src/documentation/dsls/sql/create-table.md
+++ b/website/src/documentation/dsls/sql/create-table.md
@@ -212,7 +212,7 @@ TBLPROPERTIES '{"timestampAttributeKey": "key", 
"deadLetterQueue": "projects/[PR
         The attribute key is configured by the `timestampAttributeKey` field of
         the `tblProperties` blob. The value of the attribute should conform to
         the [requirements of
-        
PubsubIO](https://beam.apache.org/documentation/sdks/javadoc/2.4.0/org/apache/beam/sdk/io/gcp/pubsub/PubsubIO.Read.html#withTimestampAttribute-java.lang.String-),
+        
PubsubIO](https://beam.apache.org/releases/javadoc/2.4.0/org/apache/beam/sdk/io/gcp/pubsub/PubsubIO.Read.html#withTimestampAttribute-java.lang.String-),
         which is either millis since Unix epoch or [RFC 339
         ](https://www.ietf.org/rfc/rfc3339.txt)date string.
 *   `attributes`: The user-provided attributes map from the Pub/Sub message;
diff --git a/website/src/documentation/dsls/sql/overview.md 
b/website/src/documentation/dsls/sql/overview.md
index 7063b168e8a..6be9e436540 100644
--- a/website/src/documentation/dsls/sql/overview.md
+++ b/website/src/documentation/dsls/sql/overview.md
@@ -32,9 +32,9 @@ There are three main things you will need to know to use SQL 
in your pipeline:
    basic dialect underlying Beam SQL. We have added additional extensions to
    make it easy to leverage Beam's unified batch/streaming model and support
    for complex data types.
- - [SqlTransform]({{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/extensions/sql/SqlTransform.html): 
+ - [SqlTransform](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/extensions/sql/SqlTransform.html): 
    the interface for creating `PTransforms` from SQL queries.
- - [Row]({{ site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/values/Row.html):
+ - [Row](https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/values/Row.html):
    the type of elements that Beam SQL operates on. A `PCollection<Row>` plays 
the role of a table.
 
 The [SQL pipeline walkthrough]({{ site.baseurl
diff --git a/website/src/documentation/dsls/sql/walkthrough.md 
b/website/src/documentation/dsls/sql/walkthrough.md
index 57fa8fb5c8b..8b8cec7370e 100644
--- a/website/src/documentation/dsls/sql/walkthrough.md
+++ b/website/src/documentation/dsls/sql/walkthrough.md
@@ -27,10 +27,9 @@ This page illustrates the usage of Beam SQL with example 
code.
 Before applying a SQL query to a `PCollection`, the data in the collection must
 be in `Row` format. A `Row` represents a single, immutable record in a Beam SQL
 `PCollection`. The names and types of the fields/columns in the row are defined
-by its associated [Schema]({{ site.baseurl }}/documentation/sdks/javadoc/{{
+by its associated [Schema](https://beam.apache.org/releases/javadoc/{{
 site.release_latest }}/index.html?org/apache/beam/sdk/schemas/Schema.html).
-You can use the [Schema.builder()]({{ site.baseurl
-}}/documentation/sdks/javadoc/{{ site.release_latest
+You can use the [Schema.builder()](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest
 }}/index.html?org/apache/beam/sdk/schemas/Schema.html) to create
 `Schemas`. See [Data
 Types]({{ site.baseurl }}/documentation/dsls/sql/data-types) for more details 
on supported primitive data types.
@@ -111,7 +110,7 @@ Once you have a `PCollection<Row>` in hand, you may use 
`SqlTransform` to apply
 
 ## SqlTransform
 
-[`SqlTransform.query(queryString)`]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/extensions/sql/SqlTransform.html) method is 
the only API to create a `PTransform`
+[`SqlTransform.query(queryString)`](https://beam.apache.org/releases/javadoc/{{
 site.release_latest 
}}/index.html?org/apache/beam/sdk/extensions/sql/SqlTransform.html) method is 
the only API to create a `PTransform`
 from a string representation of the SQL query. You can apply this `PTransform`
 to either a single `PCollection` or a `PCollectionTuple` which holds multiple
 `PCollections`:
diff --git a/website/src/documentation/pipelines/test-your-pipeline.md 
b/website/src/documentation/pipelines/test-your-pipeline.md
index e7835614870..130615056ab 100644
--- a/website/src/documentation/pipelines/test-your-pipeline.md
+++ b/website/src/documentation/pipelines/test-your-pipeline.md
@@ -174,7 +174,7 @@ Pipeline p = TestPipeline.create();
 You can use the `Create` transform to create a `PCollection` out of a standard 
in-memory collection class, such as Java `List`. See [Creating a 
PCollection]({{ site.baseurl 
}}/documentation/programming-guide/#creating-a-pcollection) for more 
information.
 
 ### PAssert
-[PAssert]({{ site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/PAssert.html) is a class included in 
the Beam Java SDK  that is an assertion on the contents of a `PCollection`. You 
can use `PAssert`to verify that a `PCollection` contains a specific set of 
expected elements.
+[PAssert](https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/PAssert.html) is a class included in 
the Beam Java SDK  that is an assertion on the contents of a `PCollection`. You 
can use `PAssert`to verify that a `PCollection` contains a specific set of 
expected elements.
 
 For a given `PCollection`, you can use `PAssert` to verify the contents as 
follows:
 
@@ -200,7 +200,7 @@ Any code that uses `PAssert` must link in `JUnit` and 
`Hamcrest`. If you're usin
 </dependency>
 ```
 
-For more information on how these classes work, see the 
[org.apache.beam.sdk.testing]({{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/package-summary.html) package 
documentation.
+For more information on how these classes work, see the 
[org.apache.beam.sdk.testing](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/package-summary.html) package 
documentation.
 
 ### An Example Test for a Composite Transform
 
diff --git a/website/src/documentation/programming-guide.md 
b/website/src/documentation/programming-guide.md
index 9eb8db5d2db..f7b1996028f 100644
--- a/website/src/documentation/programming-guide.md
+++ b/website/src/documentation/programming-guide.md
@@ -106,7 +106,7 @@ asynchronous "job" (or equivalent) on that back-end.
 
 The `Pipeline` abstraction encapsulates all the data and steps in your data
 processing task. Your Beam driver program typically starts by constructing a
-<span class="language-java">[Pipeline]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/Pipeline.html)</span>
+<span 
class="language-java">[Pipeline](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/index.html?org/apache/beam/sdk/Pipeline.html)</span>
 <span 
class="language-py">[Pipeline](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/pipeline.py)</span>
 object, and then using that object as the basis for creating the pipeline's 
data
 sets as `PCollection`s and its operations as `Transform`s.
@@ -234,7 +234,7 @@ Now your pipeline can accept `--myCustomOption=value` as a 
command-line argument
 
 ## 3. PCollections {#pcollections}
 
-The <span class="language-java">[PCollection]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/values/PCollection.html)</span>
+The <span 
class="language-java">[PCollection](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/values/PCollection.html)</span>
 <span class="language-py">`PCollection`</span> abstraction represents a
 potentially distributed, multi-element data set. You can think of a
 `PCollection` as "pipeline" data; Beam transforms use `PCollection` objects as
@@ -924,7 +924,7 @@ The formatted data looks like this:
 
 #### 4.2.4. Combine {#combine}
 
-<span class="language-java">[`Combine`]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/Combine.html)</span>
+<span 
class="language-java">[`Combine`](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/Combine.html)</span>
 <span 
class="language-py">[`Combine`](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/transforms/core.py)</span>
 is a Beam transform for combining collections of elements or values in your
 data. `Combine` has variants that work on entire `PCollection`s, and some that
@@ -1153,7 +1153,7 @@ player_accuracies = ...
 
 #### 4.2.5. Flatten {#flatten}
 
-<span class="language-java">[`Flatten`]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/Flatten.html)</span>
+<span 
class="language-java">[`Flatten`](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/Flatten.html)</span>
 <span 
class="language-py">[`Flatten`](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/transforms/core.py)</span>
 and
 is a Beam transform for `PCollection` objects that store the same data type.
 `Flatten` merges multiple `PCollection` objects into a single logical
@@ -1202,7 +1202,7 @@ pipeline is constructed.
 
 #### 4.2.6. Partition {#partition}
 
-<span class="language-java">[`Partition`]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/Partition.html)</span>
+<span 
class="language-java">[`Partition`](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/Partition.html)</span>
 <span 
class="language-py">[`Partition`](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/transforms/core.py)</span>
 is a Beam transform for `PCollection` objects that store the same data
 type. `Partition` splits a single `PCollection` into a fixed number of smaller
@@ -1587,8 +1587,8 @@ transform can make your code more modular and easier to 
understand.
 
 The Beam SDK comes packed with many useful composite transforms. See the API
 reference pages for a list of transforms:
-  * [Pre-written Beam transforms for Java]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/package-summary.html)
-  * [Pre-written Beam transforms for Python]({{ site.baseurl 
}}/documentation/sdks/pydoc/{{ site.release_latest 
}}/apache_beam.transforms.html)
+  * [Pre-written Beam transforms for 
Java](https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/package-summary.html)
+  * [Pre-written Beam transforms for 
Python](https://beam.apache.org/releases/pydoc/{{ site.release_latest 
}}/apache_beam.transforms.html)
 
 #### 4.6.1. An example composite transform {#composite-transform-example}
 
@@ -2164,7 +2164,7 @@ all the elements are by default part of a single, global 
window.
 To use windowing with fixed data sets, you can assign your own timestamps to
 each element. To assign timestamps to elements, use a `ParDo` transform with a
 `DoFn` that outputs each element with a new timestamp (for example, the
-[WithTimestamps]({{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/WithTimestamps.html)
+[WithTimestamps](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/WithTimestamps.html)
 transform in the Beam SDK for Java).
 
 To illustrate how windowing with a bounded `PCollection` can affect how your
diff --git a/website/src/documentation/runners/dataflow.md 
b/website/src/documentation/runners/dataflow.md
index 99e2b6d4bcc..1cd28baff4e 100644
--- a/website/src/documentation/runners/dataflow.md
+++ b/website/src/documentation/runners/dataflow.md
@@ -203,8 +203,8 @@ java -jar target/beam-examples-bundled-1.0.0.jar \
 </table>
 
 See the reference documentation for the
-<span class="language-java">[DataflowPipelineOptions]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/runners/dataflow/options/DataflowPipelineOptions.html)</span>
-<span class="language-py">[`PipelineOptions`]({{ site.baseurl 
}}/documentation/sdks/pydoc/{{ site.release_latest 
}}/apache_beam.options.pipeline_options.html#apache_beam.options.pipeline_options.PipelineOptions)</span>
+<span 
class="language-java">[DataflowPipelineOptions](https://beam.apache.org/releases/javadoc/{{
 site.release_latest 
}}/index.html?org/apache/beam/runners/dataflow/options/DataflowPipelineOptions.html)</span>
+<span 
class="language-py">[`PipelineOptions`](https://beam.apache.org/releases/pydoc/{{
 site.release_latest 
}}/apache_beam.options.pipeline_options.html#apache_beam.options.pipeline_options.PipelineOptions)</span>
 interface (and any subinterfaces) for additional pipeline configuration 
options.
 
 ## Additional information and caveats {#additional-info}
diff --git a/website/src/documentation/runners/direct.md 
b/website/src/documentation/runners/direct.md
index ce2e8d3145d..f61619f8470 100644
--- a/website/src/documentation/runners/direct.md
+++ b/website/src/documentation/runners/direct.md
@@ -40,11 +40,11 @@ Using the Direct Runner for testing and development helps 
ensure that pipelines
 Here are some resources with information about how to test your pipelines.
 <ul>
   <!-- Java specific links -->
-  <li class="language-java"><a href="{{ site.baseurl 
}}/blog/2016/10/20/test-stream.html">Testing Unbounded Pipelines in Apache 
Beam</a> talks about the use of Java classes <a href="{{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/PAssert.html">PAssert</a> and <a 
href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/TestStream.html">TestStream</a> to 
test your pipelines.</li>
-  <li class="language-java">The <a href="{{ site.baseurl 
}}/get-started/wordcount-example/#testing-your-pipeline-with-asserts">Apache 
Beam WordCount Walkthrough</a> contains an example of logging and testing a 
pipeline with <a href="{{ site.baseurl }}/documentation/sdks/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/PAssert.html">PAssert</a>.</li>
+  <li class="language-java"><a href="{{ site.baseurl 
}}/blog/2016/10/20/test-stream.html">Testing Unbounded Pipelines in Apache 
Beam</a> talks about the use of Java classes <a 
href="https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/PAssert.html">PAssert</a> and <a 
href="https://beam.apache.org/releases/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/TestStream.html">TestStream</a> to 
test your pipelines.</li>
+  <li class="language-java">The <a href="{{ site.baseurl 
}}/get-started/wordcount-example/#testing-your-pipeline-with-asserts">Apache 
Beam WordCount Walkthrough</a> contains an example of logging and testing a 
pipeline with <a href="https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/testing/PAssert.html">PAssert</a>.</li>
 
   <!-- Python specific links -->
-  <li class="language-py">The <a href="{{ site.baseurl 
}}/get-started/wordcount-example/#testing-your-pipeline-with-asserts">Apache 
Beam WordCount Walkthrough</a> contains an example of logging and testing a 
pipeline with <a href="{{ site.baseurl }}/documentation/sdks/pydoc/{{ 
site.release_latest 
}}/apache_beam.testing.util.html#apache_beam.testing.util.assert_that">assert_that</a>.</li>
+  <li class="language-py">The <a href="{{ site.baseurl 
}}/get-started/wordcount-example/#testing-your-pipeline-with-asserts">Apache 
Beam WordCount Walkthrough</a> contains an example of logging and testing a 
pipeline with <a href="https://beam.apache.org/releases/pydoc/{{ 
site.release_latest 
}}/apache_beam.testing.util.html#apache_beam.testing.util.assert_that">assert_that</a>.</li>
 </ul>
 
 ## Direct Runner prerequisites and setup
@@ -68,15 +68,15 @@ Here are some resources with information about how to test 
your pipelines.
 When executing your pipeline from the command-line, set `runner` to `direct` 
or `DirectRunner`. The default values for the other pipeline options are 
generally sufficient.
 
 See the reference documentation for the
-<span class="language-java">[`DirectOptions`]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/runners/direct/DirectOptions.html)</span>
-<span class="language-py">[`DirectOptions`]({{ site.baseurl 
}}/documentation/sdks/pydoc/{{ site.release_latest 
}}/apache_beam.options.pipeline_options.html#apache_beam.options.pipeline_options.DirectOptions)</span>
+<span 
class="language-java">[`DirectOptions`](https://beam.apache.org/releases/javadoc/{{
 site.release_latest 
}}/index.html?org/apache/beam/runners/direct/DirectOptions.html)</span>
+<span 
class="language-py">[`DirectOptions`](https://beam.apache.org/releases/pydoc/{{ 
site.release_latest 
}}/apache_beam.options.pipeline_options.html#apache_beam.options.pipeline_options.DirectOptions)</span>
 interface for defaults and additional pipeline configuration options.
 
 ## Additional information and caveats
 
 ### Memory considerations
 
-Local execution is limited by the memory available in your local environment. 
It is highly recommended that you run your pipeline with data sets small enough 
to fit in local memory. You can create a small in-memory data set using a <span 
class="language-java">[`Create`]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/Create.html)</span><span 
class="language-py">[`Create`](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/transforms/core.py)</span>
 transform, or you can use a <span class="language-java">[`Read`]({{ 
site.baseurl }}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/sdk/io/Read.html)</span><span 
class="language-py">[`Read`](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py)</span>
 transform to work with small local or remote files.
+Local execution is limited by the memory available in your local environment. 
It is highly recommended that you run your pipeline with data sets small enough 
to fit in local memory. You can create a small in-memory data set using a <span 
class="language-java">[`Create`](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest 
}}/index.html?org/apache/beam/sdk/transforms/Create.html)</span><span 
class="language-py">[`Create`](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/transforms/core.py)</span>
 transform, or you can use a <span 
class="language-java">[`Read`](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/index.html?org/apache/beam/sdk/io/Read.html)</span><span 
class="language-py">[`Read`](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/io/iobase.py)</span>
 transform to work with small local or remote files.
 
 ### Streaming execution
 
diff --git a/website/src/documentation/runners/flink.md 
b/website/src/documentation/runners/flink.md
index a2cac758791..ccd9df8b81e 100644
--- a/website/src/documentation/runners/flink.md
+++ b/website/src/documentation/runners/flink.md
@@ -177,7 +177,7 @@ When executing your pipeline with the Flink Runner, you can 
set these pipeline o
 </tr>
 </table>
 
-See the reference documentation for the  <span 
class="language-java">[FlinkPipelineOptions]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/index.html?org/apache/beam/runners/flink/FlinkPipelineOptions.html)</span><span
 
class="language-py">[PipelineOptions](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/options/pipeline_options.py)</span>
 interface (and its subinterfaces) for the complete list of pipeline 
configuration options.
+See the reference documentation for the  <span 
class="language-java">[FlinkPipelineOptions](https://beam.apache.org/releases/javadoc/{{
 site.release_latest 
}}/index.html?org/apache/beam/runners/flink/FlinkPipelineOptions.html)</span><span
 
class="language-py">[PipelineOptions](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/options/pipeline_options.py)</span>
 interface (and its subinterfaces) for the complete list of pipeline 
configuration options.
 
 ## Additional information and caveats
 
diff --git a/website/src/documentation/sdks/java.md 
b/website/src/documentation/sdks/java.md
index 0348d269996..b0078083f0b 100644
--- a/website/src/documentation/sdks/java.md
+++ b/website/src/documentation/sdks/java.md
@@ -27,7 +27,7 @@ The Java SDK for Apache Beam provides a simple, powerful API 
for building both b
 
 Get started with the [Beam Programming Model]({{ site.baseurl 
}}/documentation/programming-guide/) to learn the basic concepts that apply to 
all SDKs in Beam.
 
-See the [Java API Reference]({{ site.baseurl }}/documentation/sdks/javadoc/) 
for more information on individual APIs.
+See the [Java API Reference](https://beam.apache.org/releases/javadoc/) for 
more information on individual APIs.
 
 
 ## Supported Features
diff --git a/website/src/documentation/sdks/python.md 
b/website/src/documentation/sdks/python.md
index ac110064d99..ae19ee69930 100644
--- a/website/src/documentation/sdks/python.md
+++ b/website/src/documentation/sdks/python.md
@@ -25,7 +25,7 @@ The Python SDK for Apache Beam provides a simple, powerful 
API for building batc
 
 Get started with the [Beam Python SDK quickstart]({{ site.baseurl 
}}/get-started/quickstart-py) to set up your Python development environment, 
get the Beam SDK for Python, and run an example pipeline. Then, read through 
the [Beam programming guide]({{ site.baseurl 
}}/documentation/programming-guide) to learn the basic concepts that apply to 
all SDKs in Beam.
 
-See the [Python API reference]({{ site.baseurl }}/documentation/sdks/pydoc/) 
for more information on individual APIs.
+See the [Python API reference](https://beam.apache.org/releases/pydoc/) for 
more information on individual APIs.
 
 ## Python streaming pipelines
 
diff --git a/website/src/get-started/downloads.md 
b/website/src/get-started/downloads.md
index 608774f9812..6f66479bb6d 100644
--- a/website/src/get-started/downloads.md
+++ b/website/src/get-started/downloads.md
@@ -71,7 +71,7 @@ the form `major.minor.incremental` and are incremented as 
follows:
 * minor version for new functionality added in a backward-compatible manner
 * incremental version for forward-compatible bug fixes
 
-Please note that APIs marked [`@Experimental`]({{ site.baseurl 
}}/documentation/sdks/javadoc/{{ site.release_latest 
}}/org/apache/beam/sdk/annotations/Experimental.html)
+Please note that APIs marked 
[`@Experimental`](https://beam.apache.org/releases/javadoc/{{ 
site.release_latest }}/org/apache/beam/sdk/annotations/Experimental.html)
 may change at any point and are not guaranteed to remain compatible across 
versions.
 
 Additionally, any API may change before the first stable release, i.e., between
diff --git a/website/src/get-started/quickstart-java.md 
b/website/src/get-started/quickstart-java.md
index 12e98d9994f..cc070fec71f 100644
--- a/website/src/get-started/quickstart-java.md
+++ b/website/src/get-started/quickstart-java.md
@@ -363,7 +363,7 @@ has: 2
 ## Next Steps
 
 * Learn more about the [Beam SDK for Java]({{ site.baseurl 
}}/documentation/sdks/java/)
-  and look through the [Java SDK API reference]({{ site.baseurl 
}}/documentation/sdks/javadoc).
+  and look through the [Java SDK API 
reference](https://beam.apache.org/releases/javadoc).
 * Walk through these WordCount examples in the [WordCount Example 
Walkthrough]({{ site.baseurl }}/get-started/wordcount-example).
 * Dive in to some of our favorite [articles and presentations]({{ site.baseurl 
}}/documentation/resources).
 * Join the Beam [users@]({{ site.baseurl }}/community/contact-us) mailing list.
diff --git a/website/src/get-started/quickstart-py.md 
b/website/src/get-started/quickstart-py.md
index b199c5e805c..d6da9ef7ea4 100644
--- a/website/src/get-started/quickstart-py.md
+++ b/website/src/get-started/quickstart-py.md
@@ -207,7 +207,7 @@ sequentially in the format `counts-0000-of-0001`.
 ## Next Steps
 
 * Learn more about the [Beam SDK for Python]({{ site.baseurl 
}}/documentation/sdks/python/)
-  and look through the [Python SDK API reference]({{ site.baseurl 
}}/documentation/sdks/pydoc).
+  and look through the [Python SDK API 
reference](https://beam.apache.org/releases/pydoc).
 * Walk through these WordCount examples in the [WordCount Example 
Walkthrough]({{ site.baseurl }}/get-started/wordcount-example).
 * Dive in to some of our favorite [articles and presentations]({{ site.baseurl 
}}/documentation/resources).
 * Join the Beam [users@]({{ site.baseurl }}/community/contact-us) mailing list.
diff --git a/website/src/get-started/wordcount-example.md 
b/website/src/get-started/wordcount-example.md
index 684f5acada7..a1cdbd12662 100644
--- a/website/src/get-started/wordcount-example.md
+++ b/website/src/get-started/wordcount-example.md
@@ -1390,7 +1390,7 @@ To view the full code in Python, see
 
 This example uses an unbounded dataset as input. The code reads Pub/Sub
 messages from a Pub/Sub subscription or topic using
-[`beam.io.ReadStringsFromPubSub`]({{ site.baseurl 
}}/documentation/sdks/pydoc/{{ site.release_latest 
}}/apache_beam.io.gcp.pubsub.html#apache_beam.io.gcp.pubsub.ReadStringsFromPubSub).
+[`beam.io.ReadStringsFromPubSub`](https://beam.apache.org/releases/pydoc/{{ 
site.release_latest 
}}/apache_beam.io.gcp.pubsub.html#apache_beam.io.gcp.pubsub.ReadStringsFromPubSub).
 
 ```java
   // This example is not currently available for the Beam SDK for Java.
@@ -1416,7 +1416,7 @@ outputs.
 
 This example uses an unbounded `PCollection` and streams the results to
 Google Pub/Sub. The code formats the results and writes them to a Pub/Sub topic
-using [`beam.io.WriteStringsToPubSub`]({{ site.baseurl 
}}/documentation/sdks/pydoc/{{ site.release_latest 
}}/apache_beam.io.gcp.pubsub.html#apache_beam.io.gcp.pubsub.WriteStringsToPubSub).
+using 
[`beam.io.WriteStringsToPubSub`](https://beam.apache.org/releases/pydoc/{{ 
site.release_latest 
}}/apache_beam.io.gcp.pubsub.html#apache_beam.io.gcp.pubsub.WriteStringsToPubSub).
 
 ```java
   // This example is not currently available for the Beam SDK for Java.


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 151332)
    Time Spent: 3h 10m  (was: 3h)

> Migrate release Javadocs / Pydocs to [asf-site] branch and update release 
> guide
> -------------------------------------------------------------------------------
>
>                 Key: BEAM-4498
>                 URL: https://issues.apache.org/jira/browse/BEAM-4498
>             Project: Beam
>          Issue Type: Sub-task
>          Components: website
>            Reporter: Scott Wegner
>            Assignee: Scott Wegner
>            Priority: Major
>              Labels: beam-site-automation-reliability
>             Fix For: Not applicable
>
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
