azagrebin commented on a change in pull request #12131:
URL: https://github.com/apache/flink/pull/12131#discussion_r425144968



##########
File path: docs/ops/deployment/docker.md
##########
@@ -24,119 +24,476 @@ under the License.
 -->
 
 [Docker](https://www.docker.com) is a popular container runtime. 
-There are Docker images for Apache Flink available on Docker Hub which can be used to deploy a session cluster.
-The Flink repository also contains tooling to create container images to deploy a job cluster.
+There are Docker images for Apache Flink available [on Docker Hub](https://hub.docker.com/_/flink).
+You can use the Docker images to deploy a Session or Job cluster in a containerised environment, e.g.
+[standalone Kubernetes](kubernetes.html) or [native Kubernetes](native_kubernetes.html).
 
 * This will be replaced by the TOC
 {:toc}
 
-## Flink session cluster
-
-A Flink session cluster can be used to run multiple jobs. 
-Each job needs to be submitted to the cluster after it has been deployed. 
-
-### Docker images
+## Docker Hub Flink images
 
 The [Flink Docker repository](https://hub.docker.com/_/flink/) is hosted on
 Docker Hub and serves images of Flink version 1.2.1 and later.
 
-Images for each supported combination of Hadoop and Scala are available, and tag aliases are provided for convenience.
+### Image tags
 
-Beginning with Flink 1.5, image tags that omit a Hadoop version (e.g.
-`-hadoop28`) correspond to Hadoop-free releases of Flink that do not include a
-bundled Hadoop distribution.
+Images for each supported combination of Flink and Scala versions are available, and
+[tag aliases](https://hub.docker.com/_/flink?tab=tags) are provided for convenience.
 
-For example, the following aliases can be used: *(`1.5.y` indicates the latest
-release of Flink 1.5)*
+For example, you can use the following aliases: *(`1.11.y` indicates the latest release of Flink 1.11)*
 
 * `flink:latest` → `flink:<latest-flink>-scala_<latest-scala>`
-* `flink:1.5` → `flink:1.5.y-scala_2.11`
-* `flink:1.5-hadoop27` → `flink:1.5.y-hadoop27-scala_2.11`
+* `flink:1.11` → `flink:1.11.y-scala_2.11`
+
+<span class="label label-info">Note</span> Prior to Flink 1.5, Hadoop dependencies were always bundled with Flink.
+You can see that certain tags include the Hadoop version, e.g. `-hadoop28`.
+Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
+that do not include a bundled Hadoop distribution.
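+
+For instance, you can pull an image by one of the aliases above; Docker resolves it to the concrete versioned tag (the tag below is just an example):
+
+```sh
+# pull the latest 1.11 patch release for Scala 2.11 via its alias
+docker pull flink:1.11-scala_2.11
+# list the local Flink images to see which concrete tags were pulled
+docker images flink
+```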
+
+## How to run Flink image
+
+The Flink image contains a regular Flink distribution with its default configuration and a standard entry point script.
+You can run its standard entry point in the following modes:
+* Flink Master for [a Session cluster](#start-a-session-cluster)
+* Flink Master for [a Single Job cluster](#start-a-single-job-cluster)
+* TaskManager for any cluster
+
+This allows you to deploy a standalone cluster (Session or Single Job) in any containerised environment, for example:
+* manually in a local docker setup,
+* [in a Kubernetes cluster](kubernetes.html),
+* [with Docker Compose](#flink-with-docker-compose),
+* [with Docker swarm](#flink-with-docker-swarm).
+
+<span class="label label-info">Note</span> The [native Kubernetes](native_kubernetes.html) deployment also runs the same image by default
+and deploys TaskManagers on demand, so that you do not have to do it manually.
+
+The next chapters describe how to start a single Flink Docker container for various purposes.
+
+### Start a Session Cluster
+
+A Flink Session cluster can be used to run multiple jobs. Each job needs to be submitted to the cluster after it has been deployed.
+To deploy a Flink Session cluster with Docker, you need to start a Flink Master container:
+
+```sh
+docker run flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} jobmanager
+```
+
+and the required number of TaskManager containers:
+
+```sh
+docker run flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} taskmanager
+```
+
+### Start a Single Job Cluster
+
+A Flink Job cluster is a dedicated cluster which runs a single job.
+The job artifacts should already be available locally in the container, so there is no extra job submission needed.
+To deploy a cluster for a single job with Docker, you need to
+* make job artifacts available locally *in all containers* under `/opt/flink/usrlib`,

Review comment:
       True, I have not found any general description of the standalone Job cluster that is independent of the deployment environment. We should add it and then adjust the Docker docs to refer to it. For now, I can add e.g.:
   ```
    The *job artifacts* are included in the class path of Flink's JVM process within the container and consist of:
    * your job jar, which you would normally submit to a *Session cluster* and
    * all other necessary dependencies or resources, not included in Flink.
   ```
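
    For illustration, making the artifacts available could look like the following (the host path and job class below are hypothetical, and this assumes the image's `standalone-job` entry point mode):

    ```sh
    # bind-mount the job artifacts into the location the entry point expects
    docker run \
        --mount type=bind,src=/host/path/to/artifacts,target=/opt/flink/usrlib \
        flink:latest standalone-job --job-classname com.example.MyJob
    ```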




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

