This is an automated email from the ASF dual-hosted git repository.

joshfischer pushed a commit to branch joshfischer/bazel-3-docs
in repository https://gitbox.apache.org/repos/asf/incubator-heron.git


The following commit(s) were added to refs/heads/joshfischer/bazel-3-docs by 
this push:
     new 2e345c9  adding temporary release candidate docs
2e345c9 is described below

commit 2e345c9e44c4c5335c41baaa880824f6713a5b3b
Author: Josh Fischer <j...@joshfischer.io>
AuthorDate: Thu Apr 16 21:42:55 2020 -0500

    adding temporary release candidate docs
---
 website2/website/scripts/replace.js                |   3 +-
 .../compiling-code-organization.md                 | 199 +++++
 .../compiling-docker.md                            | 253 +++++++
 .../compiling-linux.md                             | 212 ++++++
 .../version-0.20.2-incubating-rc2/compiling-osx.md |  87 +++
 .../heron-streamlet-concepts.md                    | 811 +++++++++++++++++++++
 .../schedulers-k8s-with-helm.md                    | 304 ++++++++
 website2/website/versions.json                     |   1 +
 8 files changed, 1869 insertions(+), 1 deletion(-)

diff --git a/website2/website/scripts/replace.js 
b/website2/website/scripts/replace.js
index ea5a9b0..f9f7ace 100755
--- a/website2/website/scripts/replace.js
+++ b/website2/website/scripts/replace.js
@@ -38,7 +38,8 @@ const bazelVersions = {
     '0.20.0-incubating': '0.14.1',
     '0.20.1-incubating': '0.26.0',
     '0.20.2-incubating': '0.26.0',
-    'next': '3.0.0',
+    '0.20.2-incubating-rc2': '3.0.0',
+    'latest': '3.0.0',
 }
 
 function replaceBazel(version) {
diff --git 
a/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-code-organization.md
 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-code-organization.md
new file mode 100644
index 0000000..4663779
--- /dev/null
+++ 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-code-organization.md
@@ -0,0 +1,199 @@
+---
+id: version-0.20.2-incubating-rc2-compiling-code-organization
+title: Code Organization
+sidebar_label: Code Organization
+original_id: compiling-code-organization
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+This document contains information about the Heron codebase intended primarily
+for developers who want to contribute to Heron. The Heron codebase lives on
+[github]({{% githubMaster %}}).
+
+If you're looking for documentation about developing topologies for a Heron
+cluster, see [Building Topologies](topology-development-topology-api-java) 
instead.
+
+## Languages
+
+The primary programming languages for Heron are C++, Java, and Python.
+
+* **C++ 11** is used for most of Heron's core components, including the
+[Topology Master](heron-architecture#topology-master) and the
+[Stream Manager](heron-architecture#stream-manager).
+
+* **Java 11** is used primarily for Heron's [topology
+API](heron-topology-concepts) and the [Heron Instance](heron-architecture#heron-instance).
+It is currently the only language in which topologies can be written. 
Instructions can be found
+in [Building Topologies](../../developers/java/topologies), while 
documentation for the Java
+API can be found 
[here](/api/org/apache/heron/api/topology/package-summary.html). Please note 
that Heron topologies do not require Java 11 and can be written in Java 7 or 
later.
+
+* **Python 2** (specifically 2.7) is used primarily for Heron's [CLI 
interface](user-manuals-heron-cli) and UI components such as [Heron 
UI](user-manuals-heron-ui) and the [Heron 
Tracker](user-manuals-heron-tracker-runbook).
+
+## Main Tools
+
+* **Build tool** --- Heron uses [Bazel](http://bazel.io/) as its build tool.
+Information on setting up and using Bazel for Heron can be found in [Compiling 
Heron](compiling-overview).
+
+* **Inter-component communication** --- Heron uses [Protocol
+Buffers](https://developers.google.com/protocol-buffers/?hl=en) for
+communication between components. Most `.proto` definition files can be found 
in
+[`heron/proto`]({{% githubMaster %}}/heron/proto).
+
+* **Cluster coordination** --- Heron relies heavily on ZooKeeper for cluster
+coordination for distributed deployment, be it for 
[Aurora](schedulers-aurora-cluster) or for a [custom
+scheduler](extending-heron-scheduler) that you build. More information on 
ZooKeeper
+components in the codebase can be found in the [State
+Management](#state-management) section below.
+
+## Common Utilities
+
+The [`heron/common`]({{% githubMaster %}}/heron/common) directory contains a variety of
+utilities for each of Heron's languages, including useful constants, file
+utilities, networking interfaces, and more.
+
+## Cluster Scheduling
+
+Heron supports two cluster schedulers out of the box:
+[Aurora](schedulers-aurora-cluster) and a [local
+scheduler](schedulers-local). The Java code for each of those
+schedulers can be found in [`heron/schedulers`]({{% githubMaster %}}/heron/schedulers),
+while the underlying scheduler API can be found [here](/api/org/apache/heron/spi/scheduler/package-summary.html).
+
+Info on custom schedulers can be found in [Implementing a Custom
+Scheduler](extending-heron-scheduler); info on the currently available 
schedulers
+can be found in [Deploying Heron on
+Aurora](schedulers-aurora-cluster) and [Local
+Deployment](schedulers-local).
+
+## State Management
+
+The parts of Heron's codebase related to
+[ZooKeeper](http://zookeeper.apache.org/) are mostly contained in
+[`heron/state`]({{% githubMaster %}}/heron/state). There are ZooKeeper-facing
+interfaces for [C++]({{% githubMaster %}}/heron/state/src/cpp),
+[Java]({{% githubMaster %}}/heron/state/src/java), and
+[Python]({{% githubMaster %}}/heron/state/src/python) that are used in a 
variety of
+Heron components.
+
+## Topology Components
+
+### Topology Master
+
+The C++ code for Heron's [Topology
+Master](heron-architecture#topology-master) can be
+found in [`heron/tmaster`]({{% githubMaster %}}/heron/tmaster).
+
+### Stream Manager
+
+The C++ code for Heron's [Stream
+Manager](heron-architecture#stream-manager) can be found in
+[`heron/stmgr`]({{% githubMaster %}}/heron/stmgr).
+
+### Heron Instance
+
+The Java code for [Heron
+instances](heron-architecture#heron-instance) can be found in
+[`heron/instance`]({{% githubMaster %}}/heron/instance).
+
+### Metrics Manager
+
+The Java code for Heron's [Metrics
+Manager](heron-architecture#metrics-manager) can be found in
+[`heron/metricsmgr`]({{% githubMaster %}}/heron/metricsmgr).
+
+If you'd like to implement your own custom metrics handler (known as a 
**metrics
+sink**), see [Implementing a Custom Metrics Sink](extending-heron-metric-sink).
+
+## Developer APIs
+
+### Topology API
+
+Heron's API for writing topologies is written in Java. The code for this API 
can
+be found in [`heron/api`]({{% githubMaster %}}/heron/api).
+
+Documentation for writing topologies can be found in [Building
+Topologies](topology-development-topology-api-java), while API documentation 
can be found
+[here](/api/org/apache/heron/api/topology/package-summary.html).
+
+### Simulator
+
+Heron enables you to run topologies in [`Simulator`](guides-simulator-mode)
+for debugging purposes.
+
+The Java API for the simulator can be found in
+[`heron/simulator`](/api/org/apache/heron/simulator/package-summary.html).
+
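+A minimal sketch of simulator usage (the `Simulator` class follows the Storm-style local-cluster pattern; the `builder` below is assumed to be a `TopologyBuilder` from the Topology API):
+
+```java
+import org.apache.heron.api.Config;
+import org.apache.heron.simulator.Simulator;
+
+Config conf = new Config();
+
+// Run the topology in-process rather than on a cluster
+Simulator simulator = new Simulator();
+simulator.submitTopology("test-topology", conf, builder.createTopology());
+
+// Give the topology some time to process tuples, then tear everything down
+Thread.sleep(10 * 1000);
+simulator.killTopology("test-topology");
+simulator.shutdown();
+```
+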
+### Example Topologies
+
+Heron's codebase includes a wide variety of example
+[topologies](heron-topology-concepts) built using Heron's topology API for
+Java. Those examples can be found in
+[`heron/examples`]({{% githubMaster %}}/heron/examples).
+
+## User Interface Components
+
+### Heron CLI
+
+Heron has a tool called `heron` that is used both to provide a CLI interface
+for [managing topologies](user-manuals-heron-cli) and to perform much of
+the heavy lifting behind assembling physical topologies in your cluster.
+The Python code for `heron` can be found in
+[`heron/tools/cli`]({{% githubMaster %}}/heron/tools/cli).
+
+Sample configurations for different Heron schedulers:
+
+* [Local scheduler](schedulers-local) config can be found in [`heron/config/src/yaml/conf/local`]({{% githubMaster %}}/heron/config/src/yaml/conf/local)
+* [Aurora scheduler](schedulers-aurora-cluster) config can be found in [`heron/config/src/yaml/conf/aurora`]({{% githubMaster %}}/heron/config/src/yaml/conf/aurora)
+
+### Heron Tracker
+
+The Python code for the [Heron Tracker](user-manuals-heron-tracker-runbook) 
can be
+found in [`heron/tools/tracker`]({{% githubMaster %}}/heron/tools/tracker).
+
+The Tracker is a web server written in Python. It relies on the
+[Tornado](http://www.tornadoweb.org/en/stable/) framework. You can add new HTTP
+routes to the Tracker in
+[`main.py`]({{% githubMaster %}}/heron/tools/tracker/src/python/main.py) and
+corresponding handlers in the
+[`handlers`]({{% githubMaster %}}/heron/tools/tracker/src/python/handlers) 
directory.
+
+### Heron UI
+
+The Python code for the [Heron UI](user-manuals-heron-ui) can be found in
+[`heron/tools/ui`]({{% githubMaster %}}/heron/tools/ui).
+
+Like Heron Tracker, Heron UI is a web server written in Python that relies on
+the [Tornado](http://www.tornadoweb.org/en/stable/) framework. You can add new
+HTTP routes to Heron UI in
+[`main.py`]({{% githubMaster %}}/heron/web/source/python/main.py) and 
corresponding
+handlers in the [`handlers`]({{% githubMaster 
%}}/heron/web/source/python/handlers)
+directory.
+
+### Heron Shell
+
+The Python code for the [Heron Shell](user-manuals-heron-shell) can be
+found in [`heron/shell`]({{% githubMaster %}}/heron/shell). The HTTP handlers 
and
+web server are defined in
+[`main.py`]({{% githubMaster %}}/heron/shell/src/python/main.py) while the 
HTML,
+JavaScript, CSS, and images for the web UI can be found in the
+[`assets`]({{% githubMaster %}}/heron/shell/assets) directory.
+
+## Tests
+
+There are a wide variety of tests for Heron that are scattered throughout the
+codebase. For more info see [Testing Heron](compiling-running-tests).
diff --git 
a/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-docker.md
 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-docker.md
new file mode 100644
index 0000000..8688ee9
--- /dev/null
+++ 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-docker.md
@@ -0,0 +1,253 @@
+---
+id: version-0.20.2-incubating-rc2-compiling-docker
+title: Compiling With Docker
+sidebar_label: Compiling With Docker
+original_id: compiling-docker
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+For developing Heron, you will need to compile it for the environment that you
+want to use it in. If you'd like to use Docker to create that build 
environment,
+Heron provides a convenient script to make that process easier.
+
+Currently, Debian 10 and Ubuntu 18.04 are actively supported. There is also
+limited support for Ubuntu 14.04, Debian 9, and CentOS 7. If you need another
+platform, there are instructions for adding new ones
+[below](#contributing-new-environments).
+
+### Requirements
+
+* [Docker](https://docs.docker.com)
+
+### Running Docker in a Virtual Machine
+
+If you are running Docker in a virtual machine (VM), it is recommended that you
+adjust your settings to help speed up the build. To do this, open
+[VirtualBox](https://www.virtualbox.org/wiki/Downloads), go to the VM in which
+Docker is running (usually "default" or whatever name you used when creating
+the VM), click on the VM, and then click on **Settings**.
+
+**Note**: You will need to stop the VM before modifying these settings.
+
+![VirtualBox Processors](assets/virtual-box-processors.png)
+![VirtualBox Memory](assets/virtual-box-memory.png)
+
+## Building Heron
+
+Heron provides a `build-artifacts.sh` script for Docker located in the
+`docker/scripts` folder. To run that script:
+
+```bash
+$ cd /path/to/heron/repo
+$ docker/scripts/build-artifacts.sh
+```
+
+Running the script by itself will display usage information:
+
+```
+Script to build heron docker image for different platforms
+  Input - directory containing the artifacts from the directory 
<artifact-directory>
+  Output - docker image tar file saved in the directory <artifact-directory> 
+  
+Usage: ./docker/scripts/build-docker.sh <platform> <version_string> 
<artifact-directory> [-s|--squash]
+  
+Argument options:
+  <platform>: darwin, debian9, debian10, ubuntu14.04, ubuntu18.04, centos7
+  <version_string>: Version of Heron build, e.g. v0.17.5.1-rc
+  <artifact-directory>: Location of compiled Heron artifact
+  [-s|--squash]: Enables using Docker experimental feature --squash
+  
+Example:
+  ./build-docker.sh ubuntu18.04 0.12.0 ~/ubuntu
+
+NOTE: If running on OSX, the output directory will need to
+      be under /Users so virtualbox has access to.
+```
+
+The following arguments are required:
+
+* `platform` --- Currently we are focused on supporting the `debian10` and
+`ubuntu18.04` platforms. We also support building Heron locally on OSX, which
+you can specify by listing `darwin` as the platform. All options are:
+   - `centos7`
+   - `darwin`
+   - `debian9`
+   - `debian10`
+   - `ubuntu14.04`
+   - `ubuntu18.04`
+  You can add other platforms using the [instructions
+  below](#contributing-new-environments).
+* `version_string` --- The Heron release for which you'd like to build
+  artifacts.
+* `artifact-directory` --- The directory in which you'd like the release to be
+  built.
+
+Here's an example usage:
+
+```bash
+$ docker/scripts/build-artifacts.sh debian10 0.22.1-incubating ~/heron-release
+```
+
+This will build a Docker container specific to Debian 10, create a source
+tarball of the Heron repository, run a full release build of Heron, and then
+copy the artifacts into the `~/heron-release` directory.
+
+Optionally, you can also include a tarball of the Heron source if you have one.
+By default, the script will create a tarball of the current source in the Heron
+repo and use that to build the artifacts.
+
+**Note**: If you are running on Mac OS X, Docker must be run inside a VM.
+Therefore, you must make sure that both the source tarball and destination
+directory are somewhere under your home directory. For example, you cannot
+output the Heron artifacts to `/tmp` because `/tmp` refers to the directory
+inside the VM, not on the host machine. Your home directory, however, is
+automatically linked in to the VM and can be accessed normally.
+
+After the build has completed, you can go to your output directory and see all
+of the generated artifacts:
+
+```bash
+$ ls ~/heron-release
+heron-0.22.1-incubating-debian10.tar
+heron-0.22.1-incubating-debian10.tar.gz
+heron-core-0.22.1-incubating-debian10.tar.gz
+heron-install-0.22.1-incubating-debian10.sh
+heron-layer-0.22.1-incubating-debian10.tar
+heron-tools-0.22.1-incubating-debian10.tar.gz
+```
+
+## Set Up A Docker Based Development Environment
+
+If you want a development environment instead of making a full build,
+Heron provides two helper scripts. This can be convenient if you don't want
+to set up all the libraries and tools on your machine directly.
+
+The following commands create a new Docker image with a development environment
+and start a container based on it:
+```bash
+$ cd /path/to/heron/repo
+$ docker/scripts/dev-env-create.sh heron-dev
+```
+
+After the commands complete, a new Docker container is started with all the
+libraries and tools installed. The operating system is Ubuntu 18.04 by default.
+Now you can build Heron inside the container:
+```bash
+# bazel build --config=debian scripts/packages:binpkgs
+# bazel build --config=debian scripts/packages:tarpkgs
+```
+
+The current folder is mapped to the `/heron` directory in the container and any
+changes you make on the host machine will be reflected in the container. Note
+that when you exit the container and re-run the script, a new container will be
+started with a fresh environment.
+
+When a development environment container is running, you can use the following
+script to start a new terminal in the container:
+```bash
+$ cd /path/to/heron/repo
+$ docker/scripts/dev-env-run.sh heron-dev
+```
+
+## Contributing New Environments
+
+You'll notice that there are multiple
+[Dockerfiles](https://docs.docker.com/engine/reference/builder/) in the 
`docker`
+directory of Heron's source code, one for each of the currently supported
+platforms.
+
+To add support for a new platform, add a new `Dockerfile` to that directory and
+append the name of the platform to the name of the file. If you'd like to add
+support for Debian 8, for example, add a file named `Dockerfile.debian8`. Once
+you've done that, follow the instructions in the [Docker
+documentation](https://docs.docker.com/engine/articles/dockerfile_best-practices/).
+
+You should make sure that your `Dockerfile` specifies *at least* all of the
+following:
+
+### Step 1 --- The OS being used in a 
[`FROM`](https://docs.docker.com/engine/reference/builder/#from) statement.
+
+Here's an example:
+
+```dockerfile
+FROM centos:centos7
+```
+
+### Step 2 --- A `TARGET_PLATFORM` environment variable using the 
[`ENV`](https://docs.docker.com/engine/reference/builder/#env) instruction.
+
+Here's an example:
+
+```dockerfile
+ENV TARGET_PLATFORM centos
+```
+
+### Step 3 --- A general dependency installation script using a 
[`RUN`](https://docs.docker.com/engine/reference/builder/#run) instruction.
+
+Here's an example:
+
+```dockerfile
+RUN apt-get update && apt-get -y install \
+         automake \
+         build-essential \
+         cmake \
+         curl \
+         libssl-dev \
+         git \
+         libtool \
+         libunwind8 \
+         libunwind-setjmp0-dev \
+         python \
+         python2.7-dev \
+         python-software-properties \
+         software-properties-common \
+         python-setuptools \
+         unzip \
+         wget
+```
+
+### Step 4 --- An installation script for Java 11 and a `JAVA_HOME` 
environment variable
+
+Here's an example:
+
+```dockerfile
+RUN apt-get update && \
+     apt-get install -y openjdk-11-jdk-headless && \
+     rm -rf /var/lib/apt/lists/*
+
+ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64
+```
+
+### Step 5 --- An installation script for [Bazel](http://bazel.io/) version {{% bazelVersion %}} or above
+
+Here's an example:
+
+```dockerfile
+RUN wget -O /tmp/bazel.sh https://github.com/bazelbuild/bazel/releases/download/0.26.0/bazel-0.26.0-installer-linux-x86_64.sh \
+         && chmod +x /tmp/bazel.sh \
+         && /tmp/bazel.sh
+```
+
+### Step 6 --- Add the `bazelrc` configuration file for Bazel and the 
`compile.sh` script (from the `docker` folder) that compiles Heron
+
+```dockerfile
+ADD bazelrc /root/.bazelrc
+ADD compile.sh /compile.sh
+```
diff --git 
a/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-linux.md
 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-linux.md
new file mode 100644
index 0000000..03174a7
--- /dev/null
+++ 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-linux.md
@@ -0,0 +1,212 @@
+---
+id: version-0.20.2-incubating-rc2-compiling-linux
+title: Compiling on Linux
+sidebar_label: Compiling on Linux
+original_id: compiling-linux
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+Heron can currently be built on the following Linux platforms:
+
+* [Ubuntu 18.04](#building-on-ubuntu-18.04)
+* [CentOS 7](#building-on-centos-7)
+
+## Building on Ubuntu 18.04
+
+To build Heron on a fresh Ubuntu 18.04 installation:
+
+### Step 1 --- Update Ubuntu
+
+```bash
+$ sudo apt-get update -y
+$ sudo apt-get upgrade -y
+```
+
+### Step 2 --- Install required libraries
+
+```bash
+$ sudo apt-get install git build-essential automake cmake libtool-bin zip \
+  libunwind-setjmp0-dev zlib1g-dev unzip pkg-config python-setuptools -y
+```
+
+### Step 3 --- Set the following environment variables
+
+```bash
+$ export CC=/usr/bin/gcc
+$ export CXX=/usr/bin/g++
+```
+
+### Step 4 --- Install JDK 11 and set JAVA_HOME
+
+```bash
+$ sudo apt-get update -y
+$ sudo apt-get install openjdk-11-jdk-headless -y
+$ export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
+```
+
+### Step 5 --- Install Bazel {{% bazelVersion %}}
+
+```bash
+$ wget -O /tmp/bazel.sh https://github.com/bazelbuild/bazel/releases/download/0.26.0/bazel-0.26.0-installer-linux-x86_64.sh
+$ chmod +x /tmp/bazel.sh
+$ /tmp/bazel.sh --user
+```
+
+Make sure to download the appropriate version of Bazel (currently {{%
+bazelVersion %}}).
+
+### Step 6 --- Install Python development tools
+```bash
+$ sudo apt-get install python-dev python-pip
+```
+
+### Step 7 --- Make sure the Bazel executable is in your `PATH`
+
+```bash
+$ export PATH="$PATH:$HOME/bin"
+```
+
+### Step 8 --- Fetch the latest version of Heron's source code
+
+```bash
+$ git clone https://github.com/apache/incubator-heron.git && cd incubator-heron
+```
+
+### Step 9 --- Configure Heron for building with Bazel
+
+```bash
+$ ./bazel_configure.py
+```
+
+### Step 10 --- Build the project
+
+```bash
+$ bazel build --config=ubuntu heron/...
+```
+
+### Step 11 --- Build the packages
+
+```bash
+$ bazel build --config=ubuntu scripts/packages:binpkgs
+$ bazel build --config=ubuntu scripts/packages:tarpkgs
+```
+
+This will install Heron packages in the `bazel-bin/scripts/packages/` 
directory.
+
+## Manually Installing Libraries
+
+If you encounter errors with [libunwind](http://www.nongnu.org/libunwind), 
[libtool](https://www.gnu.org/software/libtool), or
+[gperftools](https://github.com/gperftools/gperftools/releases), we recommend
+installing them manually.
+
+### Compiling and installing libtool
+
+```bash
+$ wget http://ftpmirror.gnu.org/libtool/libtool-2.4.6.tar.gz
+$ tar -xvf libtool-2.4.6.tar.gz
+$ cd libtool-2.4.6
+$ ./configure
+$ make
+$ sudo make install
+```
+
+### Compiling and installing libunwind
+
+```bash
+$ wget http://download.savannah.gnu.org/releases/libunwind/libunwind-1.1.tar.gz
+$ tar -xvf libunwind-1.1.tar.gz
+$ cd libunwind-1.1
+$ ./configure
+$ make
+$ sudo make install
+```
+
+### Compiling and installing gperftools
+
+```bash
+$ wget 
https://github.com/gperftools/gperftools/releases/download/gperftools-2.5/gperftools-2.5.tar.gz
+$ tar -xvf gperftools-2.5.tar.gz
+$ cd gperftools-2.5
+$ ./configure
+$ make
+$ sudo make install
+```
+
+## Building on CentOS 7
+
+To build Heron on a fresh CentOS 7 installation:
+
+### Step 1 --- Install the required dependencies
+
+```bash
+$ sudo yum install gcc gcc-c++ kernel-devel wget unzip zlib-devel zip git 
automake cmake patch libtool -y
+```
+
+### Step 2 --- Install libunwind from source
+
+```bash
+$ wget http://download.savannah.gnu.org/releases/libunwind/libunwind-1.1.tar.gz
+$ tar xvf libunwind-1.1.tar.gz
+$ cd libunwind-1.1
+$ ./configure
+$ make
+$ sudo make install
+```
+
+### Step 3 --- Set the following environment variables
+
+```bash
+$ export CC=/usr/bin/gcc
+$ export CXX=/usr/bin/g++
+```
+
+### Step 4 --- Install JDK 11
+
+```bash
+$ sudo yum install java-11-openjdk java-11-openjdk-devel
+$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
+```
+
+### Step 5 --- Install Bazel {{% bazelVersion %}}
+
+```bash
+$ wget -O /tmp/bazel.sh https://github.com/bazelbuild/bazel/releases/download/0.26.0/bazel-0.26.0-installer-linux-x86_64.sh
+$ chmod +x /tmp/bazel.sh
+$ /tmp/bazel.sh --user
+```
+
+Make sure to download the appropriate version of Bazel (currently {{%
+bazelVersion %}}).
+
+### Step 6 --- Download Heron and compile it
+
+```bash
+$ git clone https://github.com/apache/incubator-heron.git && cd incubator-heron
+$ ./bazel_configure.py
+$ bazel build --config=centos heron/...
+```
+
+### Step 7 --- Build the binary packages
+
+```bash
+$ bazel build --config=centos scripts/packages:binpkgs
+$ bazel build --config=centos scripts/packages:tarpkgs
+```
+
+This will install Heron packages in the `bazel-bin/scripts/packages/` 
directory.
diff --git 
a/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-osx.md
 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-osx.md
new file mode 100644
index 0000000..388575c
--- /dev/null
+++ 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/compiling-osx.md
@@ -0,0 +1,87 @@
+---
+id: version-0.20.2-incubating-rc2-compiling-osx
+title: Compiling on OS X
+sidebar_label: Compiling on OS X
+original_id: compiling-osx
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+This is a step-by-step guide to building Heron on Mac OS X (versions 10.10 and
+10.11).
+
+### Step 1 --- Install Homebrew
+
+If [Homebrew](http://brew.sh/) isn't yet installed on your system, you can
+install it using this one-liner:
+
+```bash
+$ /usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+```
+
+### Step 2 --- Install Bazel
+```bash
+$ wget -O /tmp/bazel.sh https://github.com/bazelbuild/bazel/releases/download/0.26.0/bazel-0.26.0-installer-darwin-x86_64.sh
+$ chmod +x /tmp/bazel.sh
+$ /tmp/bazel.sh --user
+```
+
+### Step 3 --- Install other required libraries
+
+```bash
+$ brew install automake
+$ brew install cmake
+$ brew install libtool
+```
+
+### Step 4 --- Set the following environment variables
+
+```bash
+$ export CC=/usr/bin/clang
+$ export CXX=/usr/bin/clang++
+$ echo $CC $CXX
+```
+
+### Step 5 --- Fetch the latest version of Heron's source code
+
+```bash
+$ git clone https://github.com/apache/incubator-heron.git && cd incubator-heron
+```
+
+### Step 6 --- Configure Heron for building with Bazel
+
+```bash
+$ ./bazel_configure.py
+```
+
+If this configure script fails with missing dependencies, Homebrew can be used
+to install those dependencies.
+
+### Step 7 --- Build the project
+
+```bash
+$ bazel build --config=darwin heron/...
+```
+
+### Step 8 --- Build the packages
+
+```bash
+$ bazel build --config=darwin scripts/packages:binpkgs
+$ bazel build --config=darwin scripts/packages:tarpkgs
+```
+
+This will install Heron packages in the `bazel-bin/scripts/packages/` 
directory.
diff --git 
a/website2/website/versioned_docs/version-0.20.2-incubating-rc2/heron-streamlet-concepts.md
 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/heron-streamlet-concepts.md
new file mode 100644
index 0000000..f427c7b
--- /dev/null
+++ 
b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/heron-streamlet-concepts.md
@@ -0,0 +1,811 @@
+---
+id: version-0.20.2-incubating-rc2-heron-streamlet-concepts
+title: Heron Streamlets
+sidebar_label: Heron Streamlets
+original_id: heron-streamlet-concepts
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+When it was first released, Heron offered a **Topology API**---heavily 
indebted to the [Storm 
API](http://storm.apache.org/about/simple-api.html)---for developing topology 
logic. In the original Topology API, developers creating topologies were 
required to explicitly:
+
+* define the behavior of every 
[spout](topology-development-topology-api-java#spouts) and 
[bolt](topology-development-topology-api-java#bolts) in the topology 
+* specify how those spouts and bolts are meant to be interconnected
+
+### Problems with the Topology API
+
+Although the Storm-inspired API provided a powerful low-level interface for 
creating topologies, the spouts-and-bolts model also presented a variety of 
drawbacks for Heron developers:
+
+Drawback | Description
+:--------|:-----------
+Verbosity | In the original Topology API for both Java and Python, creating spouts and bolts required substantial boilerplate, forcing developers both to provide implementations for spout and bolt classes and to specify the connections between those spouts and bolts.
+Difficult debugging | When spouts, bolts, and the connections between them need to be created "by hand," it can be challenging to trace the origin of problems in the topology's processing chain.
+Tuple-based data model | In the older topology API, spouts and bolts passed 
[tuples](https://en.wikipedia.org/wiki/Tuple) and nothing but tuples within 
topologies. Although tuples are a powerful and flexible data type, the topology 
API forced *all* spouts and bolts to implement their own 
serialization/deserialization logic.
+
+### Advantages of the Streamlet API
+
+In contrast with the Topology API, the Heron Streamlet API offers:
+
+Advantage | Description
+:---------|:-----------
+Boilerplate-free code | Instead of needing to implement spout and bolt classes over and over again, the Heron Streamlet API enables you to create stream processing logic out of functions such as map, flatMap, join, and filter.
+Easy debugging | With the Heron Streamlet API, you don't have to worry about 
spouts and bolts, which means that you can more easily surface problems with 
your processing logic.
+Completely flexible, type-safe data model | Instead of requiring that all 
processing components pass tuples to one another (which implicitly requires 
serialization to and deserializaton from your application-specific types), the 
Heron Streamlet API enables you to write your processing logic in accordance 
with whatever types you'd like---including tuples, if you wish.<br /><br />In 
the Streamlet API for [Java](topology-development-streamlet-api), all 
streamlets are typed (e.g. `Streamlet< [...]
+
+## Streamlet API topology model
+
+Instead of spouts and bolts, as with the Topology API, the Streamlet API 
enables you to create **processing graphs** that are then automatically 
converted to spouts and bolts under the hood. Processing graphs consist of the 
following components:
+
+* **Sources** supply the processing graph with data from random generators,
+databases, web service APIs, filesystems, pub-sub messaging systems, or
+anything that implements the [source](#source-operations) interface (a sketch
+of that interface follows this list).
+* **Operators** supply the graph's processing logic, operating on data passed 
into the graph by sources.
+* **Sinks** are the terminal endpoints of the processing graph, determining 
what the graph *does* with the processed data. Sinks can involve storing data 
in a database, logging results to stdout, publishing messages to a topic in a 
pub-sub messaging system, and much more.
+
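+A rough sketch of a custom source, assuming the `org.apache.heron.streamlet.Source` interface with `setup`, `get`, and `cleanup` methods (the class and its behavior here are hypothetical):
+
+```java
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Random;
+
+import org.apache.heron.streamlet.Context;
+import org.apache.heron.streamlet.Source;
+
+// A hypothetical source that emits one random integer per invocation
+public class RandomIntSource implements Source<Integer> {
+    private Random random;
+
+    @Override
+    public void setup(Context context) {
+        random = new Random();
+    }
+
+    // Each call supplies the next batch of elements for the source streamlet
+    @Override
+    public Collection<Integer> get() {
+        return Collections.singletonList(random.nextInt(100));
+    }
+
+    @Override
+    public void cleanup() {}
+}
+```
+
+A source like this would then be registered with `builder.newSource(new RandomIntSource())`.
+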
+The diagram below illustrates both the general model (with a single source, 
three operators, and one sink), and a more concrete example that includes two 
sources (an [Apache Pulsar](https://pulsar.incubator.apache.org) topic and the 
[Twitter API](https://developer.twitter.com/en/docs)), three operators (a 
[join](#join-operations), [flatMap](#flatmap-operations), and 
[reduce](#reduce-operations) operation), and two [sinks](#sink-operations) (an 
[Apache Cassandra](http://cassandra.apache.o [...]
+
+![Topology 
Operators](https://www.lucidchart.com/publicSegments/view/d84026a1-d12e-4878-b8d5-5aa274ec0415/image.png)
+
+### Streamlets
+
+The core construct underlying the Heron Streamlet API is that of the 
**streamlet**. A streamlet is an unbounded, ordered collection of **elements** 
of some data type (streamlets can consist of simple types like integers and 
strings or more complex, application-specific data types).
+
+**Source streamlets** supply a Heron processing graph with data inputs. These 
inputs can come from a wide variety of sources, such as pub-sub messaging 
systems like [Apache
+Kafka](http://kafka.apache.org/) and [Apache 
Pulsar](https://pulsar.incubator.apache.org) (incubating), random generators, 
or static files like CSV or [Apache Parquet](https://parquet.apache.org/) files.
+
+Source streamlets can then be manipulated in a wide variety of ways. You can, 
for example:
+
+* apply [map](#map-operations), [filter](#filter-operations), 
[flatMap](#flatmap-operations), and many other operations to them
+* apply operations, such as [join](#join-operations) and 
[union](#union-operations) operations, that combine streamlets together
+* [reduce](#reduce-by-key-and-window-operations) all elements in a streamlet 
to some single value, based on key
+* send data to [sinks](#sink-operations) (store elements)
+
+The diagram below shows an example streamlet:
+
+![Streamlet](https://www.lucidchart.com/publicSegments/view/5c451e53-46f8-4e36-86f4-9a11ca015c21/image.png)
+
+
+In this diagram, the **source streamlet** is produced by a random generator 
that continuously emits random integers between 1 and 100. From there:
+
+* A filter operation is applied to the source streamlet that filters out all 
values less than or equal to 30
+* A *new streamlet* is produced by the filter operation (with the Heron 
Streamlet API, you're always transforming streamlets into other streamlets)
+* A map operation adds 15 to each item in the streamlet, which produces the 
final streamlet in our graph. We *could* hypothetically go much further and add 
as many transformation steps to the graph as we'd like.
+* Once the final desired streamlet is created, each item in the streamlet is 
sent to a sink. Sinks are where items leave the processing graph. 
+
+### Supported languages
+
+The Heron Streamlet API is currently available for:
+
+* [Java](topology-development-streamlet-api)
+* [Scala](topology-development-streamlet-scala)
+
+### The Heron Streamlet API and topologies
+
+With the Heron Streamlet API *you still create topologies*, but only 
implicitly. Heron automatically performs the heavy lifting of converting the 
streamlet-based processing logic that you create into spouts and bolts and, 
from there, into containers that are then deployed using whichever 
[scheduler](schedulers-local) your Heron cluster relies upon.
+
+From the standpoint of both operators and developers [managing topologies' 
lifecycles](#topology-lifecycle), the resulting topologies are equivalent. From 
a development workflow standpoint, however, the difference is profound. You can 
think of the Streamlet API as a highly convenient tool for creating spouts, 
bolts, and the logic that connects them.
+
+The basic workflow looks like this:
+
+![Streamlet](https://www.lucidchart.com/publicSegments/view/6b2e9b49-ef1f-45c9-8094-1e2cefbaed7b/image.png)
+
+When creating topologies using the Heron Streamlet API, you simply write code 
(example [below](#java-processing-graph-example)) in a highly functional style. 
From there:
+
+* that code is automatically converted into spouts, bolts, and the necessary 
connective logic between spouts and bolts
+* the spouts and bolts are automatically converted into a [logical 
plan](topology-development-topology-api-java#logical-plan) that specifies how 
the spouts and bolts are connected to each other
+* the logical plan is automatically converted into a [physical 
plan](topology-development-topology-api-java#physical-plan) that determines how 
the spout and bolt instances (the colored boxes above) are distributed across 
the specified number of containers (in this case two)
+
+With a physical plan in place, the Streamlet API topology can be submitted to 
a Heron cluster.
+
+#### Java processing graph example
+
+The code below shows how you could implement the processing graph shown 
[above](#streamlets) in Java:
+
+```java
+import java.util.concurrent.ThreadLocalRandom;
+
+import org.apache.heron.streamlet.Builder;
+import org.apache.heron.streamlet.Config;
+import org.apache.heron.streamlet.Runner;
+
+Builder builder = Builder.newBuilder();
+
+// Function for generating random integers
+int randomInt(int lower, int upper) {
+    return ThreadLocalRandom.current().nextInt(lower, upper + 1);
+}
+
+// Source streamlet
+builder.newSource(() -> randomInt(1, 100))
+    // Filter operation
+    .filter(i -> i > 30)
+    // Map operation
+    .map(i -> i + 15)
+    // Log sink
+    .log();
+
+Config config = new Config();
+// This topology will be spread across two containers
+config.setNumContainers(2);
+
+// Submit the processing graph to Heron as a topology
+new Runner("IntegerProcessingGraph", config, builder).run();
+```
+
+As you can see, the Java code for the example streamlet processing graph 
requires very little boilerplate and is heavily indebted to Java 
[lambda](https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html)
 patterns.
+
+## Streamlet operations
+
+In the Heron Streamlet API, processing data means *transforming streamlets 
into other streamlets*. This can be done using a wide variety of available 
operations, including many that you may be familiar with from functional 
programming:
+
+Operation | Description
+:---------|:-----------
+[map](#map-operations) | Returns a new streamlet by applying the supplied mapping function to each element in the original streamlet
+[flatMap](#flatmap-operations) | Like a map operation but with the important difference that each element of the streamlet is flattened into a collection type
+[filter](#filter-operations) | Returns a new streamlet containing only the elements that satisfy the supplied filtering function
+[union](#union-operations) | Unifies two streamlets into one, without [windowing](#windowing) or modifying the elements of the two streamlets
+[clone](#clone-operations) | Creates any number of identical copies of a streamlet
+[transform](#transform-operations) | Transforms a streamlet using whichever logic you'd like, optionally modifying the elements and updating the topology's state (useful for transformations that don't neatly map onto the available operations)
+[keyBy](#key-by-operations) | Returns a new key-value streamlet by applying the supplied extractors to each element in the original streamlet
+[reduceByKey](#reduce-by-key-operations) | Produces a streamlet of key-value pairs, one for each key, in accordance with a reduce function that you apply to all the accumulated values
+[reduceByKeyAndWindow](#reduce-by-key-and-window-operations) | Produces a streamlet of key-value pairs, one for each key within a [time window](#windowing), in accordance with a reduce function that you apply to all the accumulated values
+[countByKey](#count-by-key-operations) | A special reduce operation that counts the number of tuples for each key
+[countByKeyAndWindow](#count-by-key-and-window-operations) | A special reduce operation that counts the number of tuples for each key within a [time window](#windowing)
+[split](#split-operations) | Splits a streamlet into multiple streamlets with different IDs
+[withStream](#with-stream-operations) | Selects a stream by ID from a streamlet that contains multiple streams
+[applyOperator](#apply-operator-operations) | Returns a new streamlet by applying a user-defined operator to the original streamlet
+[join](#join-operations) | Joins two separate key-value streamlets into a single streamlet, on a key, within a [time window](#windowing), and in accordance with a join function
+[log](#log-operations) | Logs the final streamlet output of the processing graph to stdout
+[toSink](#sink-operations) | Sink operations terminate the processing graph by storing elements in a database, logging elements to stdout, etc.
+[consume](#consume-operations) | Consume operations are like sink operations except they don't require implementing a full sink interface (consume operations are thus suited for simple operations like logging)
+
+### Map operations
+
+Map operations create a new streamlet by applying the supplied mapping 
function to each element in the original streamlet.
+
+#### Java example
+
+```java
+import org.apache.heron.streamlet.Builder;
+
+Builder processingGraphBuilder = Builder.newBuilder();
+
+Streamlet<Integer> ones = processingGraphBuilder.newSource(() -> 1);
+Streamlet<Integer> thirteens = ones.map(i -> i + 12);
+```
+
+In this example, a supplier streamlet emits an indefinite series of 1s. The 
`map` operation then adds 12 to each incoming element, producing a streamlet of 
13s. The effect of this operation is to transform the `Streamlet<Integer>` into 
a `Streamlet<Integer>` with different values (map operations can also convert 
streamlets into streamlets of a different type).
+
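+Because the mapping function's return type determines the type of the new streamlet, the same pattern can change element types. A minimal sketch (reusing the `ones` streamlet from above):
+
+```java
+// Transform a Streamlet<Integer> into a Streamlet<String>
+Streamlet<String> labels = ones.map(i -> "value-" + i);
+```
+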
+### FlatMap operations
+
+FlatMap operations are like [map operations](#map-operations) but with the 
important difference that each element of the streamlet is "flattened" into a 
collection type. In the Java example below, a supplier streamlet emits the same 
sentence over and over again; the `flatMap` operation transforms each sentence 
into a Java `List` of individual words.
+
+#### Java example
+
+```java
+Streamlet<String> sentences = builder.newSource(() -> "I have nothing to 
declare but my genius");
+Streamlet<String> words = sentences
+        .flatMap((sentence) -> Arrays.asList(sentence.split("\\s+")));
+```
+
+The effect of this operation is to transform the `Streamlet<String>` of sentences into a `Streamlet<String>` containing each individual word emitted by the source streamlet (the per-sentence lists are flattened).
+
+### Filter operations
+
+Filter operations retain some elements in a streamlet and exclude other 
elements on the basis of a provided filtering function.
+
+#### Java example
+
+```java
+Streamlet<Integer> randomInts =
+    builder.newSource(() -> ThreadLocalRandom.current().nextInt(1, 11));
+Streamlet<Integer> lessThanSeven = randomInts
+        .filter(i -> i <= 7);
+```
+
+In this example, a source streamlet consisting of random integers between 1 
and 10 is modified by a filter operation that removes all streamlet elements 
that are greater than 7.
+
+### Union operations
+
+Union operations combine two streamlets of the same type into a single 
streamlet without modifying the elements.
+
+#### Java example
+
+```java
+Streamlet<String> oohs = builder.newSource(() -> "ooh");
+Streamlet<String> aahs = builder.newSource(() -> "aah");
+
+Streamlet<String> combined = oohs
+        .union(aahs);
+```
+
+Here, one streamlet is an endless series of "ooh"s while the other is an 
endless series of "aah"s. The `union` operation combines them into a single 
streamlet of alternating "ooh"s and "aah"s.
+
+### Clone operations
+
+Clone operations enable you to create any number of "copies" of a streamlet. 
Each of the "copy" streamlets contains all the elements of the original and can 
be manipulated just like the original streamlet.
+
+#### Java example
+
+```java
+import java.util.List;
+import java.util.concurrent.ThreadLocalRandom;
+
+Streamlet<Integer> integers = builder.newSource(() -> 
ThreadLocalRandom.current().nextInt(100));
+
+List<Streamlet<Integer>> copies = integers.clone(5);
+Streamlet<Integer> ints1 = copies.get(0);
+Streamlet<Integer> ints2 = copies.get(1);
+Streamlet<Integer> ints3 = copies.get(2);
+// and so on...
+```
+
+In this example, a streamlet of random integers between 0 and 99 is cloned
+into 5 identical streamlets.
+
+### Transform operations
+
+Transform operations are highly flexible operations that are most useful for:
+
+* operations involving state in [stateful 
topologies](heron-delivery-semantics#stateful-topologies)
+* operations that don't neatly fit into the other categories or into a 
lambda-based logic
+
+Transform operations require you to implement three different methods:
+
+* A `setup` method that enables you to pass a context object to the operation 
and to specify what happens prior to the `transform` step
+* A `transform` operation that performs the desired transformation
+* A `cleanup` method that allows you to specify what happens after the 
`transform` step
+
+The context object available to a transform operation provides access to:
+
+* the current state of the topology
+* the topology's configuration
+* the name of the stream
+* the stream partition
+* the current task ID
+
+Here's a Java example of a transform operation in a topology where a stateful 
record is kept of the number of items processed:
+
+```java
+import org.apache.heron.streamlet.Context;
+import org.apache.heron.streamlet.SerializableTransformer;
+
+import java.util.function.Consumer;
+
+public class CountNumberOfItems implements SerializableTransformer<String, 
String> {
+    private int numberOfItems;
+
+    public void setup(Context context) {
+        numberOfItems = (int) context.getState().get("number-of-items");
+        context.getState().put("number-of-items", numberOfItems + 1);
+    }
+
+    public void transform(String in, Consumer<String> consumer) {
+        // Apply some operation to the incoming value (uppercasing here is
+        // just an illustrative placeholder)
+        String transformedString = in.toUpperCase();
+        consumer.accept(transformedString);
+    }
+
+    public void cleanup() {
+        System.out.println(
+                String.format("Successfully processed new state: %d", 
numberOfItems));
+    }
+}
+```
+
+This operation does a few things:
+
+* In the `setup` method, the 
[`Context`](/api/java/org/apache/heron/streamlet/Context.html) object is used 
to access the current state (which has the semantics of a Java `Map`). The 
current number of items processed is incremented by one and then saved as the 
new state.
+* In the `transform` method, the incoming string is transformed in some way 
and then "accepted" as the new value.
+* In the `cleanup` step, the current count of items processed is logged.
+
+Here's that operation within the context of a streamlet processing graph:
+
+```java
+builder.newSource(() -> "Some string over and over");
+        .transform(new CountNumberOfItems())
+        .log();
+```
+
+### Key by operations
+
+Key by operations convert each item in the original streamlet into a key-value 
pair and return a new streamlet.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .keyBy(
+        // Key extractor (in this case, each word acts as the key)
+        word -> word,
+        // Value extractor (get the length of each word)
+        word -> word.length()
+    )
+    // The result is logged
+    .log();
+```
+
+### Reduce by key operations
+
+You can apply 
[reduce](https://docs.oracle.com/javase/tutorial/collections/streams/reduction.html)
 operations to streamlets by specifying:
+
+* a key extractor that determines what counts as the key for the streamlet
+* a value extractor that determines which final value is chosen for each 
element of the streamlet
+* a reduce function that produces a single value for each key in the streamlet
+
+Reduce by key operations produce a new streamlet of key-value pairs (the
+extracted key together with the reduced value).
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .reduceByKey(
+        // Key extractor (in this case, each word acts as the key)
+        word -> word,
+        // Value extractor (each word appears only once, hence the value is always 1)
+        word -> 1,
+        // Reduce operation (a running sum)
+        (x, y) -> x + y
+    )
+    // The result is logged
+    .log();
+```
+
+### Reduce by key and window operations
+
+You can apply 
[reduce](https://docs.oracle.com/javase/tutorial/collections/streams/reduction.html)
 operations to streamlets by specifying:
+
+* a key extractor that determines what counts as the key for the streamlet
+* a value extractor that determines which final value is chosen for each 
element of the streamlet
+* a [time window](heron-topology-concepts#window-operations) across which the 
operation will take place
+* a reduce function that produces a single value for each key in the streamlet
+
+Reduce by key and window operations produce a new streamlet of key-value 
window objects (which include a key-value pair including the extracted key and 
calculated value, as well as information about the window in which the 
operation took place).
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+import org.apache.heron.streamlet.WindowConfig;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .reduceByKeyAndWindow(
+        // Key extractor (in this case, each word acts as the key)
+        word -> word,
+        // Value extractor (each word appears only once, hence the value is 
always 1)
+        word -> 1,
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50),
+        // Reduce operation (a running sum)
+        (x, y) -> x + y
+    )
+    // The result is logged
+    .log();
+```
+
+### Count by key operations
+
+Count by key operations extract keys from data in the original streamlet and 
count the number of times a key has been encountered.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .countByKey(word -> word)
+    // The result is logged
+    .log();
+```
+
+### Count by key and window operations
+
+Count by key and window operations extract keys from data in the original 
streamlet and count the number of times a key has been encountered within each 
[time window](#windowing).
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+import org.apache.heron.streamlet.WindowConfig;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .countByKeyAndWindow(
+        // Key extractor (in this case, each word acts as the key)
+        word -> word,
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50)
+    )
+    // The result is logged
+    .log();
+```
+
+### Split operations
+
+Split operations split a streamlet into multiple streamlets with different IDs
+by computing the corresponding stream ID for each item in the original streamlet.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+Map<String, SerializablePredicate<String>> splitter = new HashMap<>();
+splitter.put("long_word", s -> s.length() >= 4);
+splitter.put("short_word", s -> s.length() < 4);
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    // Splits the stream into streams of long and short words
+    .split(splitter)
+    // Choose the stream of the short words
+    .withStream("short_word")
+    // The result is logged
+    .log();
+```
+
+### With stream operations
+
+With stream operations select a stream by ID from a streamlet that contains
+multiple streams. They are often used with [split](#split-operations), as in
+the sketch below.
+
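+A brief sketch (reusing the `splitter` map and `builder` from the split example above); each named stream can be selected and given its own processing branch:
+
+```java
+Streamlet<String> words = builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    // Split into "long_word" and "short_word" streams
+    .split(splitter);
+
+// Each stream is selected by its ID and processed independently
+words.withStream("long_word").log();
+words.withStream("short_word").log();
+```
+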
+### Apply operator operations
+
+Apply operator operations apply a user-defined operator (such as a bolt) to each
+element of the original streamlet and return a new streamlet.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+private class MyBoltOperator extends MyBolt implements IStreamletRichOperator<Double, Double> {
+}
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    // Apply a user-defined operation
+    .applyOperator(new MyBoltOperator())
+    // The result is logged
+    .log();
+```
+
+### Join operations
+
+Join operations in the Streamlet API take two streamlets (a "left" and a 
"right" streamlet) and join them together:
+
+* based on a key extractor for each streamlet
+* over key-value elements accumulated during a specified [time 
window](#windowing)
+* based on a [join type](#join-types) ([inner](#inner-joins), [outer 
left](#outer-left-joins), [outer right](#outer-right-joins), or 
[outer](#outer-joins))
+* using a join function that specifies *how* values will be processed
+
+You may already be familiar with `JOIN` operations in SQL databases, like this:
+
+```sql
+SELECT all_users.username, all_users.email
+FROM all_users
+INNER JOIN banned_users ON all_users.username = banned_users.username;
+```
+
+> If you'd like to unite two streamlets into one *without* applying a window
+or a join function, you can use a [union](#union-operations) operation, which
+is available for key-value streamlets as well as normal streamlets.
+
+All join operations are performed:
+
+1. Over elements accumulated during a specified [time window](#windowing)
+1. In accordance with a key and value extracted from each streamlet element 
(you must provide extractor functions for both)
+1. In accordance with a join function that produces a "joined" value for each 
pair of streamlet elements
+
+#### Join types
+
+The Heron Streamlet API supports four types of joins:
+
+Type | What the join operation yields | Default?
+:----|:-------------------------------|:--------
+[Inner](#inner-joins) | All key-values with matched keys across the left and 
right stream | Yes
+[Outer left](#outer-left-joins) | All key-values with matched keys across both 
streams plus unmatched keys in the left stream |
+[Outer right](#outer-right-joins) | All key-values with matched keys across both streams plus unmatched keys in the right stream |
+[Outer](#outer-joins) | All key-values across both the left and right stream, 
regardless of whether or not any given element has a matching key in the other 
stream |
+
+#### Inner joins
+
+Inner joins operate over the [Cartesian 
product](https://en.wikipedia.org/wiki/Cartesian_product) of the left stream 
and the right stream, i.e. over the set of all ordered pairs between the two 
streams. Imagine this set of key-value pairs accumulated within a time 
window:
+
+Left streamlet | Right streamlet
+:--------------|:---------------
+("player1", 4) | ("player1", 10)
+("player1", 5) | ("player1", 12)
+("player1", 17) | ("player2", 27)
+
+An inner join operation would thus apply the join function to all pairs of 
key-values with matching keys, **3 &times; 2 = 6** pairs in total, drawing on 
this set of key-values:
+
+Included key-values |
+:-------------------|
+("player1", 4) |
+("player1", 5) |
+("player1", 10) |
+("player1", 12) |
+("player1", 17) |
+
+> Note that the `("player2", 27)` key-value pair was *not* included in the 
stream because there's no matching key-value in the left streamlet.
+
+If the supplied join function, say, added the values together, then the 
resulting joined stream would look like this:
+
+Operation | Joined Streamlet
+:---------|:----------------
+4 + 10 | ("player1", 14)
+4 + 12 | ("player1", 16)
+5 + 10 | ("player1", 15)
+5 + 12 | ("player1", 17)
+17 + 10 | ("player1", 27)
+17 + 12 | ("player1", 29)
+
+> Inner joins are the "default" join type in the Heron Streamlet API. If you 
call the `join` method without specifying a join type, an inner join will be 
applied.
+
+##### Java example
+
+```java
+import org.apache.heron.streamlet.Streamlet;
+import org.apache.heron.streamlet.WindowConfig;
+
+class Score {
+    String playerUsername;
+    int playerScore;
+
+    // Setters and getters
+}
+
+Streamlet<Score> scores1 = /* A stream of player scores */;
+Streamlet<Score> scores2 = /* A second stream of player scores */;
+
+scores1
+    .join(
+        scores2,
+        // Key extractor for the left stream (scores1)
+        score -> score.getPlayerUsername(),
+        // Key extractor for the right stream (scores2)
+        score -> score.getPlayerUsername(),
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50),
+        // Join function (selects the larger score as the value
+        // using a ternary operator)
+        (x, y) ->
+            (x.getPlayerScore() >= y.getPlayerScore()) ?
+                x.getPlayerScore() :
+                y.getPlayerScore()
+    )
+    .log();
+```
+
+In this example, two streamlets consisting of `Score` objects are joined. In 
the `join` call, key extractors for both streams are supplied along with a 
window configuration and a join function. The resulting joined streamlet will 
consist of key-value pairs in which each player's username is the key and the 
joined (in this case highest) score is the value.
+
+By default, an [inner join](#inner-joins) is applied in join operations but 
you can also specify a different join type. Here's a Java example for an [outer 
right](#outer-right-joins) join:
+
+```java
+import org.apache.heron.streamlet.JoinType;
+
+scores1
+    .join(
+        scores2,
+        // Key extractor for the left stream (scores1)
+        score -> score.getPlayerUsername(),
+        // Key extractor for the right stream (scores2)
+        score -> score.getPlayerUsername(),
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50),
+        // Join type
+        JoinType.OUTER_RIGHT,
+        // Join function (selects the larger score as the value;
+        // in an outer join, the unmatched side is null)
+        (x, y) ->
+            (x == null) ? y.getPlayerScore() :
+            (y == null) ? x.getPlayerScore() :
+            Math.max(x.getPlayerScore(), y.getPlayerScore())
+    )
+    .log();
+```
+
+#### Outer left joins
+
+An outer left join includes the results of an [inner join](#inner-joins) 
*plus* all of the unmatched keys in the left stream. Take this example left and 
right streamlet:
+
+Left streamlet | Right streamlet
+:--------------|:---------------
+("player1", 4) | ("player1", 10)
+("player2", 5) | ("player4", 12)
+("player3", 17) |
+
+The resulting set of key-values within the time window:
+
+Included key-values |
+:-------------------|
+("player1", 4) |
+("player1", 10) |
+("player2", 5) |
+("player3", 17) |
+
+In this case, key-values with a key of `player4` are excluded because they are 
in the right stream but have no matching key with any element in the left 
stream.
+
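+##### Java example
+
+Here's a minimal sketch of an outer left join, assuming the same `Score` streamlets as in the examples above and that the unmatched side passed to the join function is `null`:
+
+```java
+import org.apache.heron.streamlet.JoinType;
+
+scores1
+    .join(
+        scores2,
+        // Key extractors for the left and right streams
+        score -> score.getPlayerUsername(),
+        score -> score.getPlayerUsername(),
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50),
+        // Join type
+        JoinType.OUTER_LEFT,
+        // Join function; y is null for unmatched left-stream keys
+        (x, y) -> (y == null)
+            ? x.getPlayerScore()
+            : Math.max(x.getPlayerScore(), y.getPlayerScore())
+    )
+    .log();
+```
+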
+#### Outer right joins
+
+An outer right join includes the results of an [inner join](#inner-joins) 
*plus* all of the unmatched keys in the right stream. Take this example left 
and right streamlet (from [above](#outer-left-joins)):
+
+Left streamlet | Right streamlet
+:--------------|:---------------
+("player1", 4) | ("player1", 10)
+("player2", 5) | ("player4", 12)
+("player3", 17) |
+
+The resulting set of key-values within the time window:
+
+Included key-values |
+:-------------------|
+("player1", 4) |
+("player1", 10) |
+("player2", 5) |
+("player4", 17) |
+
+In this case, key-values with keys of `player2` and `player3` are excluded 
because they are in the left stream but have no matching key with any element 
in the right stream.
+
+#### Outer joins
+
+Outer joins include *all* key-values across both the left and right stream, 
regardless of whether or not any given element has a matching key in the other 
stream. If you want to ensure that no element is left out of a resulting joined 
streamlet, use an outer join. Take this example left and right streamlet (from 
[above](#outer-left-joins)):
+
+Left streamlet | Right streamlet
+:--------------|:---------------
+("player1", 4) | ("player1", 10)
+("player2", 5) | ("player4", 12)
+("player3", 17) |
+
+The resulting set of key-values within the time window:
+
+Included key-values |
+:-------------------|
+("player1", 4)
+("player1", 10)
+("player2", 5)
+("player4", 12)
+("player3", 17)
+
+> Note that *all* key-values were indiscriminately included in the joined set.
+
+### Sink operations
+
+In processing graphs like the ones you build using the Heron Streamlet API, 
**sinks** are essentially the terminal points in your graph, where your 
processing logic comes to an end. A processing graph can end with writing to a 
database, publishing to a topic in a pub-sub messaging system, and so on. With 
the Streamlet API, you can implement your own custom sinks.
+
+#### Java example
+
+```java
+import org.apache.heron.streamlet.Context;
+import org.apache.heron.streamlet.Sink;
+
+public class FormattedLogSink<T> implements Sink<T> {
+    private String streamletName;
+
+    public void setup(Context context) {
+        streamletName = context.getStreamletName();
+    }
+
+    public void put(T element) {
+        String message = String.format("Streamlet %s has produced an element with a value of: '%s'",
+                streamletName,
+                element.toString());
+        System.out.println(message);
+    }
+
+    public void cleanup() {}
+}
+```
+
+In this example, the sink fetches the name of the enclosing streamlet from the 
context passed in the `setup` method. The `put` method specifies how the sink 
handles each element that is received (in this case, a formatted message is 
logged to stdout). The `cleanup` method enables you to specify what happens 
when the sink is shut down (in this case, nothing).
+
+Here is the `FormattedLogSink` at work in an example processing graph:
+
+```java
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Here is a string to be passed to the sink")
+        .toSink(new FormattedLogSink<>());
+```
+
+> [Log operations](#log-operations) rely on a log sink that is provided out of 
the box. You'll need to implement other sinks yourself.
+
+### Consume operations
+
+Consume operations are like [sink operations](#sink-operations) except they 
don't require implementing a full sink interface. Consume operations are thus 
suited for simple operations like formatted logging.
+
+#### Java example
+
+```java
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> generateRandomInteger())
+        .filter(i -> i % 2 == 0)
+        .consume(i -> {
+            String message = String.format("Even number found: %d", i);
+            System.out.println(message);
+        });
+```
+
+## Partitioning
+
+In the topology API, processing parallelism can be managed by adjusting the 
number of spouts and bolts performing different operations, enabling you to, 
for example, increase the relative parallelism of a bolt by running three 
instances of it instead of two.
+
+The Heron Streamlet API provides a different mechanism for controlling 
parallelism: **partitioning**. To understand partitioning, keep in mind that 
rather than physical spouts and bolts, the core processing construct in the 
Heron Streamlet API is the processing step. With the Heron Streamlet API, you 
can explicitly assign a number of partitions to each processing step in your 
graph (the default is one partition).
+
+The example topology [above](#streamlets) has five steps:
+
+* the random integer source
+* the "add one" map operation
+* the union operation
+* the filtering operation
+* the logging operation
+
+You could apply varying numbers of partitions to each step in that topology 
like this:
+
+```java
+import java.util.concurrent.ThreadLocalRandom;
+
+Builder builder = Builder.newBuilder();
+
+Streamlet<Integer> zeroes = builder.newSource(() -> 0)
+        .setName("zeroes");
+
+builder.newSource(() -> ThreadLocalRandom.current().nextInt(1, 11))
+        .setName("random-ints")
+        .setNumPartitions(3)
+        .map(i -> i + 1)
+        .setName("add-one")
+        .repartition(3)
+        .union(zeroes)
+        .setName("unify-streams")
+        .repartition(2)
+        .filter(i -> i != 2)
+        .setName("remove-all-twos")
+        .repartition(1)
+        .log();
+```
+
+### Repartition operations
+
+As explained [above](#partitioning), when you set a number of partitions for a 
specific operation (including for source streamlets), the same number of 
partitions is applied to all downstream operations *until* a different number 
is explicitly set. You can change the number of partitions mid-graph with a 
repartition operation, optionally supplying a function that routes each 
element to specific partitions. In the sketch below, elements greater than 5 
are sent to partitions 0 and 1, while all other elements are sent to 
partitions 2 and 3:
+
+```java
+import java.util.Arrays;
+import java.util.concurrent.ThreadLocalRandom;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> ThreadLocalRandom.current().nextInt(1, 11))
+    .repartition(4, (element, numPartitions) -> {
+        if (element > 5) {
+            return Arrays.asList(0, 1);
+        } else {
+            return Arrays.asList(2, 3);
+        }
+    });
+```
+
diff --git a/website2/website/versioned_docs/version-0.20.2-incubating-rc2/schedulers-k8s-with-helm.md b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/schedulers-k8s-with-helm.md
new file mode 100644
index 0000000..a94c15a
--- /dev/null
+++ b/website2/website/versioned_docs/version-0.20.2-incubating-rc2/schedulers-k8s-with-helm.md
@@ -0,0 +1,304 @@
+---
+id: version-0.20.2-incubating-rc2-schedulers-k8s-with-helm
+title: Kubernetes with Helm
+sidebar_label: Kubernetes with Helm
+original_id: schedulers-k8s-with-helm
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+> If you'd prefer to install Heron on Kubernetes *without* using the 
[Helm](https://helm.sh) package manager, see the [Heron on Kubernetes by 
hand](schedulers-k8s-by-hand) document.
+
+[Helm](https://helm.sh) is an open source package manager for 
[Kubernetes](https://kubernetes.io) that enables you to quickly and easily 
install even the most complex software systems on Kubernetes. Heron has a Helm 
[chart](https://docs.helm.sh/developing_charts/#charts) that you can use to 
install Heron on Kubernetes using just a few commands. The chart can be used to 
install Heron on the following platforms:
+
+* [Minikube](#minikube) (the default)
+* [Google Kubernetes Engine](#google-kubernetes-engine)
+* [Amazon Web Services](#amazon-web-services)
+* [Bare metal](#bare-metal)
+
+## Requirements
+
+In order to install Heron on Kubernetes using Helm, you'll need to have an 
existing Kubernetes cluster on one of the supported 
[platforms](#specifying-a-platform) (which includes [bare metal](#bare-metal) 
installations).
+
+## Installing the Helm client
+
+In order to get started, you need to install Helm on your machine. 
Installation instructions for [macOS](#helm-for-macos) and 
[Linux](#helm-for-linux) are below.
+
+### Helm for macOS
+
+You can install Helm on macOS using [Homebrew](https://brew.sh):
+
+```bash
+$ brew install helm
+```
+
+### Helm for Linux
+
+You can install Helm on Linux using a simple installation script:
+
+```bash
+$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
+$ chmod 700 get_helm.sh
+$ ./get_helm.sh
+```
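+
+To confirm that the client installed successfully (the exact output will vary by version), you can run:
+
+```bash
+$ helm version
+```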
+
+## Installing Heron on Kubernetes
+
+To use Helm with Kubernetes, you need to first make sure that 
[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) is using the 
right configuration context for your cluster. To check which context is being 
used:
+
+```bash
+$ kubectl config current-context
+```
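+
+If that isn't the cluster you want to install into, you can switch contexts (here `CONTEXT_NAME` is a placeholder for one of the names listed by `kubectl config get-contexts`):
+
+```bash
+$ kubectl config use-context CONTEXT_NAME
+```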
+
+Once you've installed the Helm client on your machine and gotten Helm pointing 
to your Kubernetes cluster, you need to make your client aware of the 
`heron-charts` Helm repository, which houses the chart for Heron:
+
+```bash
+$ helm repo add heron-charts https://storage.googleapis.com/heron-charts
+"heron-charts" has been added to your repositories
+```
+
+Create a namespace to install into:
+
+```bash
+$ kubectl create namespace heron
+```
+
+Now you can install the Heron package:
+
+```bash
+$ helm install heron-charts/heron -g -n heron
+```
+
+This will install Heron into the `heron` namespace (`-n`) under a randomly 
generated release name (`-g`) like `jazzy-anaconda`. To give the installation 
a name of your own, such as `heron-kube`:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron
+```
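+
+To verify that the Heron components are starting up (pod names will vary with the release name you chose):
+
+```bash
+$ kubectl get pods --namespace heron
+```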
+
+### Specifying a platform
+
+The default platform for running Heron on Kubernetes is [Minikube](#minikube). 
To specify a different platform, you can use the `--set platform=PLATFORM` 
flag. Here's an example:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=gke
+```
+
+The available platforms are:
+
+Platform | Tag
+:--------|:---
+[Minikube](#minikube) | `minikube`
+[Google Kubernetes Engine](#google-kubernetes-engine) | `gke`
+[Amazon Web Services](#amazon-web-services) | `aws`
+[Bare metal](#bare-metal) | `baremetal`
+
+#### Minikube
+
+To run Heron on Minikube, you need to first [install 
Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/). Once 
Minikube is installed, you can start it by running `minikube start`. Please 
note, however, that Heron currently requires the following resources:
+
+* 7 GB of memory
+* 5 CPUs
+* 20 GB of disk space
+
+To start up Minikube with the minimum necessary resources:
+
+```bash
+$ minikube start \
+  --memory=7168 \
+  --cpus=5 \
+  --disk-size=20g
+```
+
+Once Minikube is running, create a namespace to install into:
+
+```bash
+$ kubectl create namespace heron
+```
+
+Then install Heron in one of two ways:
+
+```bash
+# Use the Minikube default
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron
+
+# Explicitly select Minikube
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=minikube
+```
+
+#### Google Kubernetes Engine
+
+The resources required to run Heron on [Google Kubernetes 
Engine](https://cloud.google.com/kubernetes-engine/) vary based on your use 
case. To run a basic Heron cluster intended for development and 
experimentation, you'll need at least:
+
+* 3 nodes
+* [n1-standard-4](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machines
+
+To create a cluster with those resources using the 
[gcloud](https://cloud.google.com/sdk/gcloud/) tool:
+
+```bash
+$ gcloud container clusters create heron-gke-dev-cluster \
+  --num-nodes=3 \
+  --machine-type=n1-standard-4
+```
+
+For a production-ready deployment you'll want a larger cluster with:
+
+* *at least* 8 nodes
+* [n1-standard-4 or n1-standard-8](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machines (preferably the latter)
+
+To create such a cluster:
+
+```bash
+$ gcloud container clusters create heron-gke-prod-cluster \
+  --num-nodes=8 \
+  --machine-type=n1-standard-8
+```
+
+Once the cluster has been successfully created, you'll need to install that 
cluster's credentials locally so that they can be used by 
[kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). You can do 
this in just one command:
+
+```bash
+$ gcloud container clusters get-credentials heron-gke-dev-cluster # or heron-gke-prod-cluster
+```
+
+Create a namespace to install into:
+
+```bash
+$ kubectl create namespace heron
+```
+
+Now you can install Heron:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=gke
+```
+
+##### Resource configurations
+
+Helm enables you to supply sets of variables via YAML files. There are 
currently a handful of different resource configurations that can be applied to 
your Heron on GKE cluster upon installation:
+
+Configuration | Description
+:-------------|:-----------
+[`small.yaml`](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/gke/small.yaml) | Smaller Heron cluster intended for basic testing, development, and experimentation
+[`medium.yaml`](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/gke/medium.yaml) | Larger Heron cluster geared toward production usage
+
+To apply the `small` configuration, for example:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=gke \
+  --values https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gke/small.yaml
+```
+
+#### Amazon Web Services
+
+To run Heron on Kubernetes on Amazon Web Services (AWS), set the platform to `aws`:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=aws
+```
+
+##### Using S3 uploader
+
+You can have Heron use S3 to distribute user topologies. First, set up an S3 
bucket and configure an IAM user with sufficient permissions over it, then 
obtain access keys for that user. You can then deploy Heron like this:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=aws \
+  --set uploader.class=s3 \
+  --set uploader.s3Bucket=heron \
+  --set uploader.s3PathPrefix=topologies \
+  --set uploader.s3AccessKey=XXXXXXXXXXXXXXXXXXXX \
+  --set uploader.s3SecretKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
+  --set uploader.s3Region=us-west-1
+```
+
+#### Bare metal
+
+To run Heron on a bare metal Kubernetes cluster:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=baremetal
+```
+
+### Managing topologies
+
+> When setting the `heron` CLI configuration, make sure that the cluster name 
matches the name of the Helm release. This can be either the name 
auto-generated by Helm or the name you supplied as the first argument to 
`helm install` (the examples below assume a release named `heron-kubernetes`). 
Make sure to adjust the name accordingly if necessary.
+
+Once all of the components have been successfully started up, you need to open 
up a proxy port to your Kubernetes cluster using the [`kubectl 
proxy`](https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/)
 command:
+
+```bash
+$ kubectl proxy -p 8001
+```
+
+> Note: all of the following Kubernetes-specific URLs are valid as of the 
Kubernetes 1.10.0 release.
+
+Now, verify that the Heron API server running in your cluster is available 
using curl:
+
+```bash
+$ curl http://localhost:8001/api/v1/namespaces/heron/services/heron-kubernetes-apiserver:9000/proxy/api/v1/version
+```
+
+You should get a JSON response like this:
+
+```json
+{
+  "heron.build.git.revision" : "ddbb98bbf173fb082c6fd575caaa35205abe34df",
+  "heron.build.git.status" : "Clean",
+  "heron.build.host" : "ci-server-01",
+  "heron.build.time" : "Sat Mar 31 09:27:19 UTC 2018",
+  "heron.build.timestamp" : "1522488439000",
+  "heron.build.user" : "release-agent",
+  "heron.build.version" : "0.17.8"
+}
+```
+
+## Running topologies on Heron on Kubernetes
+
+Once you have a Heron cluster up and running on Kubernetes via Helm, you can 
use the [`heron` CLI tool](user-manuals-heron-cli) as normal once you set the 
proper URL for the [Heron API server](deployment-api-server). When running 
Heron on Kubernetes, that URL is:
+
+```bash
+http://localhost:8001/api/v1/namespaces/heron/services/heron-kubernetes-apiserver:9000/proxy
+```
+
+To set that URL:
+
+```bash
+$ heron config heron-kubernetes set service_url \
+  http://localhost:8001/api/v1/namespaces/heron/services/heron-kubernetes-apiserver:9000/proxy
+```
+
+To test your cluster, you can submit an example topology:
+
+```bash
+$ heron submit heron-kubernetes \
+  ~/.heron/examples/heron-streamlet-examples.jar \
+  org.apache.heron.examples.streamlet.WindowedWordCountTopology \
+  WindowedWordCount
+```
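+
+If the topology submits successfully, you can remove it with the `heron kill` command, using the same cluster and topology names:
+
+```bash
+$ heron kill heron-kubernetes WindowedWordCount
+```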
diff --git a/website2/website/versions.json b/website2/website/versions.json
index 858a64d..3ad5a45 100644
--- a/website2/website/versions.json
+++ b/website2/website/versions.json
@@ -1,4 +1,5 @@
 [
+  "0.20.2-incubating-rc2",
   "0.20.2-incubating",
   "0.20.1-incubating",
   "0.20.0-incubating"
