This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 8919177d2a4ccc50ed625c9b6f15a8b19d29c4de
Author: Till Rohrmann <[email protected]>
AuthorDate: Tue Nov 24 17:51:16 2020 +0100

    [FLINK-20342][docs] Renamed ops/deployment to ops/resource-providers
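
    The change is mechanical: move the directory, then rewrite every link that
    still points at the old path. A minimal sketch of how such a rename can be
    reproduced; the script, temp-repo layout, and file names below are
    illustrative assumptions, not part of this commit:

```shell
# Illustrative sketch only -- mimics the rename in a throwaway repo to show
# the two mechanical steps behind the diff that follows.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p docs/ops/deployment
echo '# Mesos setup' > docs/ops/deployment/mesos.md
echo 'See [Mesos]({% link ops/deployment/mesos.md %}).' > docs/page.md
git add -A
git -c user.name=demo -c user.email=demo@example.invalid commit -qm 'init'

# Step 1: move the directory; git records this as pure renames
# (the 0-change entries in the diffstat below).
git mv docs/ops/deployment docs/ops/resource-providers

# Step 2: rewrite every remaining link to the old path (GNU sed).
grep -rl 'ops/deployment' docs | xargs sed -i 's|ops/deployment|ops/resource-providers|g'

grep 'resource-providers' docs/page.md
# -> See [Mesos]({% link ops/resource-providers/mesos.md %}).
```

    Applied to the real tree, the same two steps yield the rename entries and
    the one-line link edits shown in the diff.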
---
 docs/concepts/flink-architecture.md                  |  6 +++---
 docs/concepts/flink-architecture.zh.md               |  6 +++---
 docs/dev/batch/hadoop_compatibility.md               |  2 +-
 docs/dev/batch/hadoop_compatibility.zh.md            |  2 +-
 docs/dev/project-configuration.md                    |  2 +-
 docs/dev/project-configuration.zh.md                 |  2 +-
 docs/dev/table/connectors/hive/index.md              |  2 +-
 docs/dev/table/connectors/hive/index.zh.md           |  2 +-
 docs/dev/table/sqlClient.md                          |  2 +-
 docs/dev/table/sqlClient.zh.md                       |  2 +-
 docs/flinkDev/building.md                            |  2 +-
 docs/flinkDev/building.zh.md                         |  2 +-
 docs/index.md                                        |  2 +-
 docs/index.zh.md                                     |  2 +-
 docs/monitoring/debugging_classloading.md            |  2 +-
 docs/monitoring/debugging_classloading.zh.md         |  2 +-
 docs/ops/filesystems/index.md                        |  2 +-
 docs/ops/filesystems/index.zh.md                     |  2 +-
 docs/ops/index.md                                    | 12 ++++++------
 docs/ops/index.zh.md                                 | 12 ++++++------
 docs/ops/jobmanager_high_availability.md             |  6 +++---
 docs/ops/memory/mem_migration.md                     | 20 ++++++++++----------
 docs/ops/memory/mem_migration.zh.md                  | 16 ++++++++--------
 docs/ops/memory/mem_setup.md                         |  2 +-
 docs/ops/memory/mem_tuning.md                        |  4 ++--
 docs/ops/memory/mem_tuning.zh.md                     |  4 ++--
 docs/ops/python_shell.md                             |  2 +-
 docs/ops/python_shell.zh.md                          |  2 +-
 .../cluster_setup.md                                 |  0
 .../cluster_setup.zh.md                              |  0
 .../ops/{deployment => resource-providers}/docker.md |  0
 .../{deployment => resource-providers}/docker.zh.md  |  0
 .../ops/{deployment => resource-providers}/hadoop.md |  0
 .../{deployment => resource-providers}/hadoop.zh.md  |  0
 docs/ops/{deployment => resource-providers}/index.md |  0
 .../{deployment => resource-providers}/index.zh.md   |  0
 .../{deployment => resource-providers}/kubernetes.md |  0
 .../kubernetes.zh.md                                 |  0
 docs/ops/{deployment => resource-providers}/local.md |  0
 .../{deployment => resource-providers}/local.zh.md   |  0
 docs/ops/{deployment => resource-providers}/mesos.md |  0
 .../{deployment => resource-providers}/mesos.zh.md   |  0
 .../native_kubernetes.md                             |  6 +++---
 .../native_kubernetes.zh.md                          |  4 ++--
 .../{deployment => resource-providers}/yarn_setup.md |  0
 .../yarn_setup.zh.md                                 |  0
 docs/ops/upgrading.md                                |  2 +-
 docs/ops/upgrading.zh.md                             |  2 +-
 docs/redirects/aws.md                                |  2 +-
 docs/redirects/gce_setup.md                          |  4 ++--
 docs/redirects/local_setup_tutorial.md               |  2 +-
 docs/redirects/mapr.md                               |  4 ++--
 docs/redirects/oss.md                                |  2 +-
 docs/redirects/setup_quickstart.md                   |  2 +-
 docs/redirects/windows_local_setup.md                |  2 +-
 docs/release-notes/flink-1.11.md                     | 12 ++++++------
 docs/release-notes/flink-1.11.zh.md                  | 12 ++++++------
 57 files changed, 90 insertions(+), 90 deletions(-)

diff --git a/docs/concepts/flink-architecture.md b/docs/concepts/flink-architecture.md
index 604dfba..09946dd 100644
--- a/docs/concepts/flink-architecture.md
+++ b/docs/concepts/flink-architecture.md
@@ -53,9 +53,9 @@ that triggers the execution, or in the command line process `./bin/flink run
 
 The JobManager and TaskManagers can be started in various ways: directly on
 the machines as a [standalone cluster]({% link
-ops/deployment/cluster_setup.md %}), in containers, or managed by resource
-frameworks like [YARN]({% link ops/deployment/yarn_setup.md
-%}) or [Mesos]({% link ops/deployment/mesos.md %}).
+ops/resource-providers/cluster_setup.md %}), in containers, or managed by resource
+frameworks like [YARN]({% link ops/resource-providers/yarn_setup.md
+%}) or [Mesos]({% link ops/resource-providers/mesos.md %}).
 TaskManagers connect to JobManagers, announcing themselves as available, and
 are assigned work.
 
diff --git a/docs/concepts/flink-architecture.zh.md b/docs/concepts/flink-architecture.zh.md
index b63fa78..6fe7dd1 100644
--- a/docs/concepts/flink-architecture.zh.md
+++ b/docs/concepts/flink-architecture.zh.md
@@ -53,9 +53,9 @@ that triggers the execution, or in the command line process `./bin/flink run
 
 The JobManager and TaskManagers can be started in various ways: directly on
 the machines as a [standalone cluster]({% link
-ops/deployment/cluster_setup.zh.md %}), in containers, or managed by resource
-frameworks like [YARN]({% link ops/deployment/yarn_setup.zh.md
-%}) or [Mesos]({% link ops/deployment/mesos.zh.md %}).
+ops/resource-providers/cluster_setup.zh.md %}), in containers, or managed by resource
+frameworks like [YARN]({% link ops/resource-providers/yarn_setup.zh.md
+%}) or [Mesos]({% link ops/resource-providers/mesos.zh.md %}).
 TaskManagers connect to JobManagers, announcing themselves as available, and
 are assigned work.
 
diff --git a/docs/dev/batch/hadoop_compatibility.md b/docs/dev/batch/hadoop_compatibility.md
index c639ed7..4e77661 100644
--- a/docs/dev/batch/hadoop_compatibility.md
+++ b/docs/dev/batch/hadoop_compatibility.md
@@ -64,7 +64,7 @@ and Reducers.
 </dependency>
 {% endhighlight %}
 
-See also **[how to configure hadoop dependencies]({{ site.baseurl }}/ops/deployment/hadoop.html#add-hadoop-classpaths)**.
+See also **[how to configure hadoop dependencies]({{ site.baseurl }}/ops/resource-providers/hadoop.html#add-hadoop-classpaths)**.
 
 ### Using Hadoop InputFormats
 
diff --git a/docs/dev/batch/hadoop_compatibility.zh.md b/docs/dev/batch/hadoop_compatibility.zh.md
index 52b3a51..097269d 100644
--- a/docs/dev/batch/hadoop_compatibility.zh.md
+++ b/docs/dev/batch/hadoop_compatibility.zh.md
@@ -64,7 +64,7 @@ and Reducers.
 </dependency>
 {% endhighlight %}
 
-See also **[how to configure hadoop dependencies]({{ site.baseurl }}/ops/deployment/hadoop.html#add-hadoop-classpaths)**.
+See also **[how to configure hadoop dependencies]({{ site.baseurl }}/ops/resource-providers/hadoop.html#add-hadoop-classpaths)**.
 
 ### Using Hadoop InputFormats
 
diff --git a/docs/dev/project-configuration.md b/docs/dev/project-configuration.md
index 4aebbcf..30d9d76 100644
--- a/docs/dev/project-configuration.md
+++ b/docs/dev/project-configuration.md
@@ -152,7 +152,7 @@ for details on how to build Flink for a specific Scala version.
 *(The only exception being when using existing Hadoop input-/output formats with Flink's Hadoop compatibility wrappers)*
 
 If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than
-adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide]({{ site.baseurl }}/ops/deployment/hadoop.html)
+adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide]({{ site.baseurl }}/ops/resource-providers/hadoop.html)
 for details.
 
 There are two main reasons for that design:
diff --git a/docs/dev/project-configuration.zh.md b/docs/dev/project-configuration.zh.md
index 4aebbcf..30d9d76 100644
--- a/docs/dev/project-configuration.zh.md
+++ b/docs/dev/project-configuration.zh.md
@@ -152,7 +152,7 @@ for details on how to build Flink for a specific Scala version.
 *(The only exception being when using existing Hadoop input-/output formats with Flink's Hadoop compatibility wrappers)*
 
 If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than
-adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide]({{ site.baseurl }}/ops/deployment/hadoop.html)
+adding Hadoop as an application dependency. Please refer to the [Hadoop Setup Guide]({{ site.baseurl }}/ops/resource-providers/hadoop.html)
 for details.
 
 There are two main reasons for that design:
diff --git a/docs/dev/table/connectors/hive/index.md b/docs/dev/table/connectors/hive/index.md
index c1e8931..c64aff4 100644
--- a/docs/dev/table/connectors/hive/index.md
+++ b/docs/dev/table/connectors/hive/index.md
@@ -93,7 +93,7 @@ Alternatively, you can put these dependencies in a dedicated folder, and add the
 or `-l` option for Table API program or SQL Client respectively.
 
 Apache Hive is built on Hadoop, so you need Hadoop dependency first, please refer to
-[Providing Hadoop classes]({{ site.baseurl }}/ops/deployment/hadoop.html#providing-hadoop-classes).
+[Providing Hadoop classes]({{ site.baseurl }}/ops/resource-providers/hadoop.html#providing-hadoop-classes).
 
 There are two ways to add Hive dependencies. First is to use Flink's bundled Hive jars. You can choose a bundled Hive jar according to the version of the metastore you use. Second is to add each of the required jars separately. The second way can be useful if the Hive version you're using is not listed here.
 
diff --git a/docs/dev/table/connectors/hive/index.zh.md b/docs/dev/table/connectors/hive/index.zh.md
index 0509dc2..71c5d77 100644
--- a/docs/dev/table/connectors/hive/index.zh.md
+++ b/docs/dev/table/connectors/hive/index.zh.md
@@ -92,7 +92,7 @@ Flink 支持一下的 Hive 版本。
 或者,您可以将这些依赖项放在专用文件夹中,并分别使用 Table API 程序或 SQL Client 的`-C`或`-l`选项将它们添加到 classpath 中。
 
 Apache Hive 是基于 Hadoop 之上构建的, 首先您需要 Hadoop 的依赖,请参考
-[Providing Hadoop classes]({{ site.baseurl }}/zh/ops/deployment/hadoop.html#providing-hadoop-classes).
+[Providing Hadoop classes]({{ site.baseurl }}/zh/ops/resource-providers/hadoop.html#providing-hadoop-classes).
 
 有两种添加 Hive 依赖项的方法。第一种是使用 Flink 提供的 Hive Jar包。您可以根据使用的 Metastore 的版本来选择对应的 Hive jar。第二个方式是分别添加每个所需的 jar 包。如果您使用的 Hive 版本尚未在此处列出,则第二种方法会更适合。
 
diff --git a/docs/dev/table/sqlClient.md b/docs/dev/table/sqlClient.md
index 1634513..7bd9fb8 100644
--- a/docs/dev/table/sqlClient.md
+++ b/docs/dev/table/sqlClient.md
@@ -37,7 +37,7 @@ Getting Started
 
 This section describes how to set up and run your first Flink SQL program from the command-line.
 
-The SQL Client is bundled in the regular Flink distribution and thus runnable out-of-the-box. It requires only a running Flink cluster where table programs can be executed. For more information about setting up a Flink cluster see the [Cluster & Deployment]({{ site.baseurl }}/ops/deployment/cluster_setup.html) part. If you simply want to try out the SQL Client, you can also start a local cluster with one worker using the following command:
+The SQL Client is bundled in the regular Flink distribution and thus runnable out-of-the-box. It requires only a running Flink cluster where table programs can be executed. For more information about setting up a Flink cluster see the [Cluster & Deployment]({{ site.baseurl }}/ops/resource-providers/cluster_setup.html) part. If you simply want to try out the SQL Client, you can also start a local cluster with one worker using the following command:
 
 {% highlight bash %}
 ./bin/start-cluster.sh
diff --git a/docs/dev/table/sqlClient.zh.md b/docs/dev/table/sqlClient.zh.md
index a357972..e107924 100644
--- a/docs/dev/table/sqlClient.zh.md
+++ b/docs/dev/table/sqlClient.zh.md
@@ -36,7 +36,7 @@ Flink 的 Table & SQL API 可以处理 SQL 语言编写的查询语句,但是
 
 本节介绍如何在命令行里启动(setup)和运行你的第一个 Flink SQL 程序。
 
-SQL 客户端捆绑在常规 Flink 发行版中,因此可以直接运行。它仅需要一个正在运行的 Flink 集群就可以在其中执行表程序。有关设置 Flink 群集的更多信息,请参见[集群和部署]({{ site.baseurl }}/zh/ops/deployment/cluster_setup.html)部分。如果仅想试用 SQL 客户端,也可以使用以下命令启动本地集群:
+SQL 客户端捆绑在常规 Flink 发行版中,因此可以直接运行。它仅需要一个正在运行的 Flink 集群就可以在其中执行表程序。有关设置 Flink 群集的更多信息,请参见[集群和部署]({{ site.baseurl }}/zh/ops/resource-providers/cluster_setup.html)部分。如果仅想试用 SQL 客户端,也可以使用以下命令启动本地集群:
 
 {% highlight bash %}
 ./bin/start-cluster.sh
diff --git a/docs/flinkDev/building.md b/docs/flinkDev/building.md
index ea66302..1113ae9 100644
--- a/docs/flinkDev/building.md
+++ b/docs/flinkDev/building.md
@@ -123,7 +123,7 @@ mvn clean install
 
 ## Hadoop Versions
 
-Please see the [Hadoop integration section]({% link ops/deployment/hadoop.md %}) on how to handle Hadoop classes and versions.
+Please see the [Hadoop integration section]({% link ops/resource-providers/hadoop.md %}) on how to handle Hadoop classes and versions.
 
 ## Scala Versions
 
diff --git a/docs/flinkDev/building.zh.md b/docs/flinkDev/building.zh.md
index 0e3e716..7fc6859 100644
--- a/docs/flinkDev/building.zh.md
+++ b/docs/flinkDev/building.zh.md
@@ -127,7 +127,7 @@ mvn clean install
 
 ## Hadoop 版本
 
-请查看 [Hadoop 集成模块]({% link ops/deployment/hadoop.zh.md %}) 一节中关于处理 Hadoop 的类和版本问题的方法。
+请查看 [Hadoop 集成模块]({% link ops/resource-providers/hadoop.zh.md %}) 一节中关于处理 Hadoop 的类和版本问题的方法。
 
 ## Scala 版本
 
diff --git a/docs/index.md b/docs/index.md
index a627de5..7485a4d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -76,7 +76,7 @@ The reference documentation covers all the details. Some starting points:
 
 ### Deploy Flink
 
-Before putting your Flink job into production, read the [Production Readiness Checklist]({% link ops/production_ready.md %}). For an overview of possible deployment targets, see [Clusters and Deployments]({% link ops/deployment/index.md %}).
+Before putting your Flink job into production, read the [Production Readiness Checklist]({% link ops/production_ready.md %}). For an overview of possible deployment targets, see [Clusters and Deployments]({% link ops/resource-providers/index.md %}).
 
 ### Upgrade Flink
 
diff --git a/docs/index.zh.md b/docs/index.zh.md
index 965ec8b..4a8cbc8 100644
--- a/docs/index.zh.md
+++ b/docs/index.zh.md
@@ -76,7 +76,7 @@ Apache Flink 是一个在无界和有界数据流上进行状态计算的框架
 
 ### 部署 Flink
 
-在线上环境运行你的 Flink 作业之前,请阅读 [生产环境注意事项检查清单]({% link ops/production_ready.zh.md %}). 各种部署环境一览,详见 [集群与部署]({% link ops/deployment/index.zh.md %}).
+在线上环境运行你的 Flink 作业之前,请阅读 [生产环境注意事项检查清单]({% link ops/production_ready.zh.md %}). 各种部署环境一览,详见 [集群与部署]({% link ops/resource-providers/index.zh.md %}).
 
 ### 升级 Flink
 
diff --git a/docs/monitoring/debugging_classloading.md b/docs/monitoring/debugging_classloading.md
index 8a33792..ca78cd9 100644
--- a/docs/monitoring/debugging_classloading.md
+++ b/docs/monitoring/debugging_classloading.md
@@ -78,7 +78,7 @@ YARN classloading differs between single job deployments and sessions:
 
 **Mesos**
 
-Mesos setups following [this documentation]({% link ops/deployment/mesos.md %}) currently behave very much like the a
+Mesos setups following [this documentation]({% link ops/resource-providers/mesos.md %}) currently behave very much like a
 YARN session: The TaskManager and JobManager processes are started with the Flink framework classes in the Java classpath, job
 classes are loaded dynamically when the jobs are submitted.
 
diff --git a/docs/monitoring/debugging_classloading.zh.md b/docs/monitoring/debugging_classloading.zh.md
index b7e6700..6db9e34 100644
--- a/docs/monitoring/debugging_classloading.zh.md
+++ b/docs/monitoring/debugging_classloading.zh.md
@@ -78,7 +78,7 @@ YARN classloading differs between single job deployments and sessions:
 
 **Mesos**
 
-Mesos setups following [this documentation]({% link ops/deployment/mesos.zh.md %}) currently behave very much like the a
+Mesos setups following [this documentation]({% link ops/resource-providers/mesos.zh.md %}) currently behave very much like a
 YARN session: The TaskManager and JobManager processes are started with the Flink framework classes in the Java classpath, job
 classes are loaded dynamically when the jobs are submitted.
 
diff --git a/docs/ops/filesystems/index.md b/docs/ops/filesystems/index.md
index b9b260f..cef2f8d 100644
--- a/docs/ops/filesystems/index.md
+++ b/docs/ops/filesystems/index.md
@@ -100,7 +100,7 @@ in your implementation.
 
 For all schemes where Flink cannot find a directly supported file system, it falls back to Hadoop.
 All Hadoop file systems are automatically available when `flink-runtime` and the Hadoop libraries are on the classpath.
-See also **[Hadoop Integration]({% link ops/deployment/hadoop.md %})**.
+See also **[Hadoop Integration]({% link ops/resource-providers/hadoop.md %})**.
 
 This way, Flink seamlessly supports all of Hadoop file systems implementing the `org.apache.hadoop.fs.FileSystem` interface,
 and all Hadoop-compatible file systems (HCFS).
diff --git a/docs/ops/filesystems/index.zh.md b/docs/ops/filesystems/index.zh.md
index a488201..c95c96b 100644
--- a/docs/ops/filesystems/index.zh.md
+++ b/docs/ops/filesystems/index.zh.md
@@ -82,7 +82,7 @@ cp ./opt/flink-s3-fs-hadoop-{{ site.version }}.jar ./plugins/s3-fs-hadoop/
 ## Hadoop 文件系统 (HDFS) 及其其他实现
 
 所有 Flink 无法找到直接支持的文件系统均将回退为 Hadoop。
-当 `flink-runtime` 和 Hadoop 类包含在 classpath 中时,所有的 Hadoop 文件系统将自动可用。参见 **[Hadoop 集成]({% link ops/deployment/hadoop.zh.md %})**。
+当 `flink-runtime` 和 Hadoop 类包含在 classpath 中时,所有的 Hadoop 文件系统将自动可用。参见 **[Hadoop 集成]({% link ops/resource-providers/hadoop.zh.md %})**。
 
 因此,Flink 无缝支持所有实现 `org.apache.hadoop.fs.FileSystem` 接口的 Hadoop 文件系统和所有兼容 Hadoop 的文件系统 (Hadoop-compatible file system, HCFS):
   - HDFS (已测试)
diff --git a/docs/ops/index.md b/docs/ops/index.md
index e7a6b88..dfe07a4 100644
--- a/docs/ops/index.md
+++ b/docs/ops/index.md
@@ -120,7 +120,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         Run Flink locally for basic testing and experimentation
-        <br><a href="{% link ops/deployment/local.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/local.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -131,7 +131,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         A simple solution for running Flink on bare metal or VMs
-        <br><a href="{% link ops/deployment/cluster_setup.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/cluster_setup.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -142,7 +142,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         Deploy Flink on top of Apache Hadoop's resource manager
-        <br><a href="{% link ops/deployment/yarn_setup.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/yarn_setup.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -155,7 +155,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         A generic resource manager for running distributed systems
-        <br><a href="{% link ops/deployment/mesos.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/mesos.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -166,7 +166,7 @@ Apache Flink ships with first class support for a number of common deployment ta
      </div>
       <div class="panel-body">
         A popular solution for running Flink within a containerized environment
-        <br><a href="{% link ops/deployment/docker.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/docker.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -177,7 +177,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         An automated system for deploying containerized applications
-        <br><a href="{% link ops/deployment/kubernetes.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/kubernetes.md %}">Learn more</a>
       </div>
     </div>
   </div>
diff --git a/docs/ops/index.zh.md b/docs/ops/index.zh.md
index 86c6c3d..88496ef 100644
--- a/docs/ops/index.zh.md
+++ b/docs/ops/index.zh.md
@@ -120,7 +120,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         Run Flink locally for basic testing and experimentation
-        <br><a href="{% link ops/deployment/local.zh.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/local.zh.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -131,7 +131,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         A simple solution for running Flink on bare metal or VMs
-        <br><a href="{% link ops/deployment/cluster_setup.zh.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/cluster_setup.zh.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -142,7 +142,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         Deploy Flink on top of Apache Hadoop's resource manager
-        <br><a href="{% link ops/deployment/yarn_setup.zh.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/yarn_setup.zh.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -155,7 +155,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         A generic resource manager for running distributed systems
-        <br><a href="{% link ops/deployment/mesos.zh.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/mesos.zh.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -166,7 +166,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         A popular solution for running Flink within a containerized environment
-        <br><a href="{% link ops/deployment/docker.zh.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/docker.zh.md %}">Learn more</a>
       </div>
     </div>
   </div>
@@ -177,7 +177,7 @@ Apache Flink ships with first class support for a number of common deployment ta
       </div>
       <div class="panel-body">
         An automated system for deploying containerized applications
-        <br><a href="{% link ops/deployment/kubernetes.zh.md %}">Learn more</a>
+        <br><a href="{% link ops/resource-providers/kubernetes.zh.md %}">Learn more</a>
       </div>
     </div>
   </div>
diff --git a/docs/ops/jobmanager_high_availability.md b/docs/ops/jobmanager_high_availability.md
index 96684b3..e331d80 100644
--- a/docs/ops/jobmanager_high_availability.md
+++ b/docs/ops/jobmanager_high_availability.md
@@ -216,7 +216,7 @@ Starting zookeeper daemon on host localhost.</pre>
 $ bin/yarn-session.sh -n 2</pre>
 
 ## Kubernetes Cluster High Availability
-Kubernetes high availability service could support both [standalone Flink on Kubernetes]({{ site.baseurl}}/ops/deployment/kubernetes.html) and [native Kubernetes integration]({{ site.baseurl}}/ops/deployment/native_kubernetes.html).
+Kubernetes high availability service could support both [standalone Flink on Kubernetes]({{ site.baseurl}}/ops/resource-providers/kubernetes.html) and [native Kubernetes integration]({{ site.baseurl}}/ops/resource-providers/native_kubernetes.html).
 
 When running Flink JobManager as a Kubernetes deployment, the replica count should be configured to 1 or greater.
 * The value `1` means that a new JobManager will be launched to take over leadership if the current one terminates exceptionally.
@@ -230,9 +230,9 @@ high-availability.storageDir: hdfs:///flink/recovery
 {% endhighlight %}
 
 #### Example: Highly Available Standalone Flink Cluster on Kubernetes
-Both session and job/application clusters support using the Kubernetes high availability service. Users just need to add the following Flink config options to [flink-configuration-configmap.yaml]({{ site.baseurl}}/ops/deployment/kubernetes.html#common-cluster-resource-definitions). All other yamls do not need to be updated.
+Both session and job/application clusters support using the Kubernetes high availability service. Users just need to add the following Flink config options to [flink-configuration-configmap.yaml]({{ site.baseurl}}/ops/resource-providers/kubernetes.html#common-cluster-resource-definitions). All other yamls do not need to be updated.
 
-<span class="label label-info">Note</span> The filesystem which corresponds to the scheme of your configured HA storage directory must be available to the runtime. Refer to [custom Flink image]({{ site.baseurl}}/ops/deployment/docker.html#customize-flink-image) and [enable plugins]({{ site.baseurl}}/ops/deployment/docker.html#using-plugins) for more information.
+<span class="label label-info">Note</span> The filesystem which corresponds to the scheme of your configured HA storage directory must be available to the runtime. Refer to [custom Flink image]({{ site.baseurl}}/ops/resource-providers/docker.html#customize-flink-image) and [enable plugins]({{ site.baseurl}}/ops/resource-providers/docker.html#using-plugins) for more information.
 
 {% highlight yaml %}
 apiVersion: v1
diff --git a/docs/ops/memory/mem_migration.md b/docs/ops/memory/mem_migration.md
index 3fb650c..168e813 100644
--- a/docs/ops/memory/mem_migration.md
+++ b/docs/ops/memory/mem_migration.md
@@ -109,7 +109,7 @@ The following options are deprecated but if they are still used they will be int
             <td><h5>taskmanager.heap.size</h5></td>
             <td>
                 <ul>
-                  <li><a href="{% link ops/config.md %}#taskmanager-memory-flink-size">taskmanager.memory.flink.size</a> for <a href="{% link ops/deployment/cluster_setup.md %}">standalone deployment</a></li>
+                  <li><a href="{% link ops/config.md %}#taskmanager-memory-flink-size">taskmanager.memory.flink.size</a> for <a href="{% link ops/resource-providers/cluster_setup.md %}">standalone deployment</a></li>
                   <li><a href="{% link ops/config.md %}#taskmanager-memory-process-size">taskmanager.memory.process.size</a> for containerized deployments</li>
                 </ul>
                 See also <a href="#total-memory-previously-heap-memory">how to migrate total memory</a>.
@@ -189,8 +189,8 @@ It is recommended to use the new option because the legacy one can be removed in
 #### Fraction
 
 If not set explicitly, the managed memory could be previously specified as a fraction (`taskmanager.memory.fraction`)
-of the total memory minus network memory and container cut-off (only for [Yarn]({% link ops/deployment/yarn_setup.md %}) and
-[Mesos]({% link ops/deployment/mesos.md %}) deployments). This option has been completely removed and will have no effect if still used.
+of the total memory minus network memory and container cut-off (only for [Yarn]({% link ops/resource-providers/yarn_setup.md %}) and
+[Mesos]({% link ops/resource-providers/mesos.md %}) deployments). This option has been completely removed and will have no effect if still used.
 Please, use the new option [`taskmanager.memory.managed.fraction`]({% link ops/config.md %}#taskmanager-memory-managed-fraction) instead.
 This new option will set the [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory) to the specified fraction of the
 [total Flink memory]({% link ops/memory/mem_setup.md %}#configure-total-memory) if its size is not set explicitly by
@@ -201,7 +201,7 @@ This new option will set the [managed memory]({% link ops/memory/mem_setup_tm.md
 If the [RocksDBStateBackend]({% link ops/state/state_backends.md %}#the-rocksdbstatebackend) is chosen for a streaming job,
 its native memory consumption should now be accounted for in [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory).
 The RocksDB memory allocation is limited by the [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory) size.
-This should prevent the killing of containers on [Yarn]({% link ops/deployment/yarn_setup.md %}) and [Mesos]({% link ops/deployment/mesos.md %}).
+This should prevent the killing of containers on [Yarn]({% link ops/resource-providers/yarn_setup.md %}) and [Mesos]({% link ops/resource-providers/mesos.md %}).
 You can disable the RocksDB memory control by setting [state.backend.rocksdb.memory.managed]({% link ops/config.md %}#state-backend-rocksdb-memory-managed)
 to `false`. See also [how to migrate container cut-off](#container-cut-off-memory).
 
@@ -218,19 +218,19 @@ Previously, there were options responsible for setting the *JVM Heap* size of th
 * `jobmanager.heap.size`
 * `jobmanager.heap.mb`
 
-Despite their naming, they represented the *JVM Heap* only for [standalone deployments]({% link ops/deployment/cluster_setup.md %}).
-For the containerized deployments ([Kubernetes]({% link ops/deployment/kubernetes.md %}) and [Yarn]({% link ops/deployment/yarn_setup.md %})),
+Despite their naming, they represented the *JVM Heap* only for [standalone deployments]({% link ops/resource-providers/cluster_setup.md %}).
+For the containerized deployments ([Kubernetes]({% link ops/resource-providers/kubernetes.md %}) and [Yarn]({% link ops/resource-providers/yarn_setup.md %})),
 they also included other off-heap memory consumption. The size of *JVM Heap* was additionally reduced by the container
 cut-off which has been completely removed after *1.11*.
 
-The [Mesos]({% link ops/deployment/mesos.md %}) integration did not take into account the mentioned legacy memory options.
+The [Mesos]({% link ops/resource-providers/mesos.md %}) integration did not take into account the mentioned legacy memory options.
 The scripts provided in Flink to start the Mesos JobManager process did not set any memory JVM arguments. After the *1.11* release,
-they are set the same way as it is done by the [standalone deployment]({% link ops/deployment/cluster_setup.md %}) scripts.
+they are set the same way as it is done by the [standalone deployment]({% link ops/resource-providers/cluster_setup.md %}) scripts.
 
 The mentioned legacy options have been deprecated. If they are used without specifying the corresponding new options,
 they will be directly translated into the following new options:
-* JVM Heap ([`jobmanager.memory.heap.size`]({% link ops/config.md %}#jobmanager-memory-heap-size)) for [standalone]({% link ops/deployment/cluster_setup.md %}) and [Mesos]({% link ops/deployment/mesos.md %}) deployments
-* Total process memory ([`jobmanager.memory.process.size`]({% link ops/config.md %}#jobmanager-memory-process-size)) for containerized deployments ([Kubernetes]({% link ops/deployment/kubernetes.md %}) and [Yarn]({% link ops/deployment/yarn_setup.md %}))
+* JVM Heap ([`jobmanager.memory.heap.size`]({% link ops/config.md %}#jobmanager-memory-heap-size)) for [standalone]({% link ops/resource-providers/cluster_setup.md %}) and [Mesos]({% link ops/resource-providers/mesos.md %}) deployments
+* Total process memory ([`jobmanager.memory.process.size`]({% link ops/config.md %}#jobmanager-memory-process-size)) for containerized deployments ([Kubernetes]({% link ops/resource-providers/kubernetes.md %}) and [Yarn]({% link ops/resource-providers/yarn_setup.md %}))
 
 It is also recommended using these new options instead of the legacy ones as they might be completely removed in the following releases.
 
diff --git a/docs/ops/memory/mem_migration.zh.md 
b/docs/ops/memory/mem_migration.zh.md
index 02d8e33..41ee311 100644
--- a/docs/ops/memory/mem_migration.zh.md
+++ b/docs/ops/memory/mem_migration.zh.md
@@ -105,7 +105,7 @@ Flink 自带的[默认 
flink-conf.yaml](#default-configuration-in-flink-confyaml
             <td><h5>taskmanager.heap.size</h5></td>
             <td>
                 <ul>
-                  <li><a href="{%link ops/deployment/cluster_setup.zh.md 
%}">独立部署模式(Standalone Deployment)</a>下:<a href="{%link ops/config.zh.md 
%}#taskmanager-memory-flink-size">taskmanager.memory.flink.size</a></li>
+                  <li><a href="{%link 
ops/resource-providers/cluster_setup.zh.md %}">独立部署模式(Standalone 
Deployment)</a>下:<a href="{%link ops/config.zh.md 
%}#taskmanager-memory-flink-size">taskmanager.memory.flink.size</a></li>
                   <li>容器化部署模式(Containerized Deployement)下:<a href="{%link 
ops/config.zh.md 
%}#taskmanager-memory-process-size">taskmanager.memory.process.size</a></li>
                 </ul>
                 请参考<a href="#total-memory-previously-heap-memory">如何升级总内存</a>。
@@ -190,8 +190,8 @@ Flink 现在总是会预留一部分 JVM 堆内存供框架使用([`taskmanage
 
 #### 占比
 
-此前,如果不指定明确的大小,也可以将托管内存配置为占用总内存减去网络内存和容器切除内存(仅在 [Yarn]({% link 
ops/deployment/yarn_setup.zh.md %}) 和
-[Mesos]({% link ops/deployment/mesos.zh.md %}) 
上)之后剩余部分的固定比例(`taskmanager.memory.fraction`)。
+此前,如果不指定明确的大小,也可以将托管内存配置为占用总内存减去网络内存和容器切除内存(仅在 [Yarn]({% link 
ops/resource-providers/yarn_setup.zh.md %}) 和
+[Mesos]({% link ops/resource-providers/mesos.zh.md %}) 
上)之后剩余部分的固定比例(`taskmanager.memory.fraction`)。
 该配置参数已经被彻底移除,配置它不会产生任何效果。
 请使用新的配置参数 [`taskmanager.memory.managed.fraction`]({% link ops/config.zh.md 
%}#taskmanager-memory-managed-fraction)。
 在未通过 [`taskmanager.memory.managed.size`]({% link ops/config.zh.md 
%}#taskmanager-memory-managed-size) 指定明确大小的情况下,新的配置参数将指定[托管内存]({% link 
ops/memory/mem_setup_tm.zh.md %}#managed-memory)在 [Flink 总内存]({% link 
ops/memory/mem_setup.zh.md %}#configure-total-memory)中的所占比例。
@@ -201,7 +201,7 @@ Flink 现在总是会预留一部分 JVM 堆内存供框架使用([`taskmanage
 #### RocksDB State Backend
 
 流处理作业如果选择使用 [RocksDBStateBackend]({% link ops/state/state_backends.zh.md 
%}#rocksdbstatebackend),它使用的本地内存现在也被归为[托管内存]({% link 
ops/memory/mem_setup_tm.zh.md %}#managed-memory)。
-默认情况下,RocksDB 将限制其内存用量不超过[托管内存]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory)大小,以避免在 [Yarn]({% link ops/deployment/yarn_setup.zh.md %}) 或 
[Mesos]({% link ops/deployment/mesos.zh.md %}) 上容器被杀。你也可以通过设置 
[state.backend.rocksdb.memory.managed]({% link ops/config.zh.md 
%}#state-backend-rocksdb-memory-managed) 来关闭 RocksDB 的内存控制。
+默认情况下,RocksDB 将限制其内存用量不超过[托管内存]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory)大小,以避免在 [Yarn]({% link 
ops/resource-providers/yarn_setup.zh.md %}) 或 [Mesos]({% link 
ops/resource-providers/mesos.zh.md %}) 上容器被杀。你也可以通过设置 
[state.backend.rocksdb.memory.managed]({% link ops/config.zh.md 
%}#state-backend-rocksdb-memory-managed) 来关闭 RocksDB 的内存控制。
 请参考[如何升级容器切除内存](#container-cut-off-memory)。
 
 <a name="other-changes" />
@@ -221,14 +221,14 @@ Flink 现在总是会预留一部分 JVM 堆内存供框架使用([`taskmanage
 * `jobmanager.heap.size`
 * `jobmanager.heap.mb`
 
-尽管这两个参数以“堆(Heap)”命名,在此之前它们实际上只有在[独立部署模式]({% link 
ops/deployment/cluster_setup.zh.md %})才完全对应于 *JVM 堆内存*。
-在容器化部署模式下([Kubernetes]({% link ops/deployment/kubernetes.zh.md %}) 和 [Yarn]({% 
link ops/deployment/yarn_setup.zh.md %})),它们指定的内存还包含了其他堆外内存部分。
+尽管这两个参数以“堆(Heap)”命名,在此之前它们实际上只有在[独立部署模式]({% link 
ops/resource-providers/cluster_setup.zh.md %})才完全对应于 *JVM 堆内存*。
+在容器化部署模式下([Kubernetes]({% link ops/resource-providers/kubernetes.zh.md %}) 和 
[Yarn]({% link ops/resource-providers/yarn_setup.zh.md %})),它们指定的内存还包含了其他堆外内存部分。
 *JVM 堆空间*的实际大小,是参数指定的大小减去容器切除(Cut-Off)内存后剩余的部分。
 容器切除内存在 *1.11* 及以上版本中已被彻底移除。
 
-上述两个参数此前对 [Mesos]({% link ops/deployment/mesos.zh.md %}) 部署模式并不生效。
+上述两个参数此前对 [Mesos]({% link ops/resource-providers/mesos.zh.md %}) 部署模式并不生效。
 Flink 在 Mesos 上启动 JobManager 进程时并未设置任何 JVM 内存参数。
-从 *1.11* 版本开始,Flink 将采用与[独立部署模式]({% link ops/deployment/cluster_setup.zh.md 
%})相同的方式设置这些参数。
+从 *1.11* 版本开始,Flink 将采用与[独立部署模式]({% link 
ops/resource-providers/cluster_setup.zh.md %})相同的方式设置这些参数。
 
 这两个配置参数目前已被弃用。
 如果配置了上述弃用的参数,同时又没有配置与之对应的新配置参数,那它们将按如下规则对应到新的配置参数。
diff --git a/docs/ops/memory/mem_setup.md b/docs/ops/memory/mem_setup.md
index f480b05..edb9413 100644
--- a/docs/ops/memory/mem_setup.md
+++ b/docs/ops/memory/mem_setup.md
@@ -59,7 +59,7 @@ The simplest way to setup memory in Flink is to configure 
either of the two foll
 The rest of the memory components will be adjusted automatically, based on 
default values or additionally configured options.
 See also how to set up other components for [TaskManager]({% link 
ops/memory/mem_setup_tm.md %}) and [JobManager]({% link 
ops/memory/mem_setup_jobmanager.md %}) memory.
 
-Configuring *total Flink memory* is better suited for [standalone 
deployments]({% link ops/deployment/cluster_setup.md %})
+Configuring *total Flink memory* is better suited for [standalone 
deployments]({% link ops/resource-providers/cluster_setup.md %})
 where you want to declare how much memory is given to Flink itself. The *total 
Flink memory* splits up into *JVM Heap*
 and *Off-heap* memory.
 See also [how to configure memory for standalone deployments]({% link 
ops/memory/mem_tuning.md %}#configure-memory-for-standalone-deployment).
diff --git a/docs/ops/memory/mem_tuning.md b/docs/ops/memory/mem_tuning.md
index 1a8a3c3..3d75bcd 100644
--- a/docs/ops/memory/mem_tuning.md
+++ b/docs/ops/memory/mem_tuning.md
@@ -32,7 +32,7 @@ depending on the use case and which options are important for 
each case.
 
 It is recommended to configure [total Flink memory]({% link 
ops/memory/mem_setup.md %}#configure-total-memory)
 ([`taskmanager.memory.flink.size`]({% link ops/config.md 
%}#taskmanager-memory-flink-size) or [`jobmanager.memory.flink.size`]({% link 
ops/config.md %}#jobmanager-memory-flink-size))
-or its components for [standalone deployment]({% link 
ops/deployment/cluster_setup.md %}) where you want to declare how much memory
+or its components for [standalone deployment]({% link 
ops/resource-providers/cluster_setup.md %}) where you want to declare how much 
memory
 is given to Flink itself. Additionally, you can adjust *JVM metaspace* if it 
causes [problems]({% link ops/memory/mem_trouble.md 
%}#outofmemoryerror-metaspace).
 
 The *total Process memory* is not relevant because *JVM overhead* is not 
controlled by Flink or the deployment environment,
@@ -42,7 +42,7 @@ only physical resources of the executing machine matter in 
this case.
 
 It is recommended to configure [total process memory]({% link 
ops/memory/mem_setup.md %}#configure-total-memory)
 ([`taskmanager.memory.process.size`]({% link ops/config.md 
%}#taskmanager-memory-process-size) or [`jobmanager.memory.process.size`]({% 
link ops/config.md %}#jobmanager-memory-process-size))
-for the containerized deployments ([Kubernetes]({% link 
ops/deployment/kubernetes.md %}), [Yarn]({% link ops/deployment/yarn_setup.md 
%}) or [Mesos]({% link ops/deployment/mesos.md %})).
+for the containerized deployments ([Kubernetes]({% link 
ops/resource-providers/kubernetes.md %}), [Yarn]({% link 
ops/resource-providers/yarn_setup.md %}) or [Mesos]({% link 
ops/resource-providers/mesos.md %})).
 It declares how much memory in total should be assigned to the Flink *JVM 
process* and corresponds to the size of the requested container.
 
 <span class="label label-info">Note</span> If you configure the *total Flink 
memory* Flink will implicitly add JVM memory components
diff --git a/docs/ops/memory/mem_tuning.zh.md b/docs/ops/memory/mem_tuning.zh.md
index 6dac114..d2c1c9e 100644
--- a/docs/ops/memory/mem_tuning.zh.md
+++ b/docs/ops/memory/mem_tuning.zh.md
@@ -31,7 +31,7 @@ under the License.
 
 ## 独立部署模式(Standalone Deployment)下的内存配置
 
-[独立部署模式]({% link ops/deployment/cluster_setup.zh.md %})下,我们通常更关注 Flink 
应用本身使用的内存大小。
+[独立部署模式]({% link ops/resource-providers/cluster_setup.zh.md %})下,我们通常更关注 Flink 
应用本身使用的内存大小。
 建议配置 [Flink 总内存]({% link ops/memory/mem_setup.zh.md 
%}#configure-total-memory)([`taskmanager.memory.flink.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-flink-size) 或者 
[`jobmanager.memory.flink.size`]({% link ops/config.zh.md 
%}#jobmanager-memory-flink-size.zh.md %}))或其组成部分。
 此外,如果出现 [Metaspace 不足的问题]({% link ops/memory/mem_trouble.zh.md 
%}#outofmemoryerror-metaspace),可以调整 *JVM Metaspace* 的大小。
 
@@ -41,7 +41,7 @@ under the License.
 
 ## 容器(Container)的内存配置
 
-在容器化部署模式(Containerized Deployment)下([Kubernetes]({% link 
ops/deployment/kubernetes.zh.md %})、[Yarn]({% link 
ops/deployment/yarn_setup.zh.md %}) 或 [Mesos]({% link 
ops/deployment/mesos.zh.md %})),建议配置[进程总内存]({% link ops/memory/mem_setup.zh.md 
%}#configure-total-memory)([`taskmanager.memory.process.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-process-size) 或者 
[`jobmanager.memory.process.size`]({% link ops/config.zh.md 
%}#jobmanager-memory-process-size))。
+在容器化部署模式(Containerized Deployment)下([Kubernetes]({% link 
ops/resource-providers/kubernetes.zh.md %})、[Yarn]({% link 
ops/resource-providers/yarn_setup.zh.md %}) 或 [Mesos]({% link 
ops/resource-providers/mesos.zh.md %})),建议配置[进程总内存]({% link 
ops/memory/mem_setup.zh.md 
%}#configure-total-memory)([`taskmanager.memory.process.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-process-size) 或者 
[`jobmanager.memory.process.size`]({% link ops/config.zh.md 
%}#jobmanager-memory-process-size))。
 该配置参数用于指定分配给 Flink *JVM 进程*的总内存,也就是需要申请的容器大小。
 
 <span class="label label-info">提示</span>
diff --git a/docs/ops/python_shell.md b/docs/ops/python_shell.md
index f56a3ba..58e6965 100644
--- a/docs/ops/python_shell.md
+++ b/docs/ops/python_shell.md
@@ -24,7 +24,7 @@ under the License.
 
 Flink comes with an integrated interactive Python Shell.
 It can be used in a local setup as well as in a cluster setup.
-See the [local setup page]({% link ops/deployment/local.md %}) for more 
information about how to setup a local Flink.
+See the [local setup page]({% link ops/resource-providers/local.md %}) for 
more information about how to setup a local Flink.
 You can also [build a local setup from source]({% link flinkDev/building.md 
%}).
 
 <span class="label label-info">Note</span> The Python Shell will run the 
command “python”. Please refer to the Python Table API [installation guide]({% 
link dev/python/installation.md %}) on how to set up the Python execution 
environments.
diff --git a/docs/ops/python_shell.zh.md b/docs/ops/python_shell.zh.md
index 33c80ae..2f70049 100644
--- a/docs/ops/python_shell.zh.md
+++ b/docs/ops/python_shell.zh.md
@@ -24,7 +24,7 @@ under the License.
 
 Flink附带了一个集成的交互式Python Shell。
 它既能够运行在本地启动的local模式,也能够运行在集群启动的cluster模式下。
-本地安装Flink,请看[本地安装]({% link ops/deployment/local.zh.md %})页面。
+本地安装Flink,请看[本地安装]({% link ops/resource-providers/local.zh.md %})页面。
 您也可以从源码安装Flink,请看[从源码构建 Flink]({% link flinkDev/building.zh.md %})页面。
 
 <span class="label label-info">注意</span> Python 
Shell会调用“python”命令。关于Python执行环境的要求,请参考Python Table API[环境安装]({% link 
dev/python/installation.zh.md %})。
diff --git a/docs/ops/deployment/cluster_setup.md 
b/docs/ops/resource-providers/cluster_setup.md
similarity index 100%
rename from docs/ops/deployment/cluster_setup.md
rename to docs/ops/resource-providers/cluster_setup.md
diff --git a/docs/ops/deployment/cluster_setup.zh.md 
b/docs/ops/resource-providers/cluster_setup.zh.md
similarity index 100%
rename from docs/ops/deployment/cluster_setup.zh.md
rename to docs/ops/resource-providers/cluster_setup.zh.md
diff --git a/docs/ops/deployment/docker.md 
b/docs/ops/resource-providers/docker.md
similarity index 100%
rename from docs/ops/deployment/docker.md
rename to docs/ops/resource-providers/docker.md
diff --git a/docs/ops/deployment/docker.zh.md 
b/docs/ops/resource-providers/docker.zh.md
similarity index 100%
rename from docs/ops/deployment/docker.zh.md
rename to docs/ops/resource-providers/docker.zh.md
diff --git a/docs/ops/deployment/hadoop.md 
b/docs/ops/resource-providers/hadoop.md
similarity index 100%
rename from docs/ops/deployment/hadoop.md
rename to docs/ops/resource-providers/hadoop.md
diff --git a/docs/ops/deployment/hadoop.zh.md 
b/docs/ops/resource-providers/hadoop.zh.md
similarity index 100%
rename from docs/ops/deployment/hadoop.zh.md
rename to docs/ops/resource-providers/hadoop.zh.md
diff --git a/docs/ops/deployment/index.md b/docs/ops/resource-providers/index.md
similarity index 100%
rename from docs/ops/deployment/index.md
rename to docs/ops/resource-providers/index.md
diff --git a/docs/ops/deployment/index.zh.md 
b/docs/ops/resource-providers/index.zh.md
similarity index 100%
rename from docs/ops/deployment/index.zh.md
rename to docs/ops/resource-providers/index.zh.md
diff --git a/docs/ops/deployment/kubernetes.md 
b/docs/ops/resource-providers/kubernetes.md
similarity index 100%
rename from docs/ops/deployment/kubernetes.md
rename to docs/ops/resource-providers/kubernetes.md
diff --git a/docs/ops/deployment/kubernetes.zh.md 
b/docs/ops/resource-providers/kubernetes.zh.md
similarity index 100%
rename from docs/ops/deployment/kubernetes.zh.md
rename to docs/ops/resource-providers/kubernetes.zh.md
diff --git a/docs/ops/deployment/local.md b/docs/ops/resource-providers/local.md
similarity index 100%
rename from docs/ops/deployment/local.md
rename to docs/ops/resource-providers/local.md
diff --git a/docs/ops/deployment/local.zh.md 
b/docs/ops/resource-providers/local.zh.md
similarity index 100%
rename from docs/ops/deployment/local.zh.md
rename to docs/ops/resource-providers/local.zh.md
diff --git a/docs/ops/deployment/mesos.md b/docs/ops/resource-providers/mesos.md
similarity index 100%
rename from docs/ops/deployment/mesos.md
rename to docs/ops/resource-providers/mesos.md
diff --git a/docs/ops/deployment/mesos.zh.md 
b/docs/ops/resource-providers/mesos.zh.md
similarity index 100%
rename from docs/ops/deployment/mesos.zh.md
rename to docs/ops/resource-providers/mesos.zh.md
diff --git a/docs/ops/deployment/native_kubernetes.md 
b/docs/ops/resource-providers/native_kubernetes.md
similarity index 97%
rename from docs/ops/deployment/native_kubernetes.md
rename to docs/ops/resource-providers/native_kubernetes.md
index 3ca8ea9..c053d68 100644
--- a/docs/ops/deployment/native_kubernetes.md
+++ b/docs/ops/resource-providers/native_kubernetes.md
@@ -83,8 +83,8 @@ Please refer to the following 
[section](#custom-flink-docker-image).
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-If you want to use a custom Docker image to deploy Flink containers, check 
[the Flink Docker image documentation]({% link ops/deployment/docker.md %}),
-[its tags]({% link ops/deployment/docker.md %}#image-tags), [how to customize 
the Flink Docker image]({% link ops/deployment/docker.md 
%}#customize-flink-image) and [enable plugins]({% link ops/deployment/docker.md 
%}#using-plugins).
+If you want to use a custom Docker image to deploy Flink containers, check 
[the Flink Docker image documentation]({% link ops/resource-providers/docker.md 
%}),
+[its tags]({% link ops/resource-providers/docker.md %}#image-tags), [how to 
customize the Flink Docker image]({% link ops/resource-providers/docker.md 
%}#customize-flink-image) and [enable plugins]({% link 
ops/resource-providers/docker.md %}#using-plugins).
 If you created a custom Docker image you can provide it by setting the 
[`kubernetes.container.image`]({% link ops/config.md 
%}#kubernetes-container-image) configuration option:
 
 {% highlight bash %}
@@ -208,7 +208,7 @@ $ kubectl delete deployment/<ClusterID>
 
 ### Start Flink Application
 <div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job 
and the Flink runtime, which will automatically create and destroy cluster 
components as needed. The Flink community provides base docker images 
[customized]({% link ops/deployment/docker.md %}#customize-flink-image) for any 
use case.
+Application mode allows users to create a single image containing their Job 
and the Flink runtime, which will automatically create and destroy cluster 
components as needed. The Flink community provides base docker images 
[customized]({% link ops/resource-providers/docker.md %}#customize-flink-image) 
for any use case.
 <div data-lang="java" markdown="1">
 {% highlight dockerfile %}
 FROM flink
diff --git a/docs/ops/deployment/native_kubernetes.zh.md 
b/docs/ops/resource-providers/native_kubernetes.zh.md
similarity index 97%
rename from docs/ops/deployment/native_kubernetes.zh.md
rename to docs/ops/resource-providers/native_kubernetes.zh.md
index 76867ee..01d3749 100644
--- a/docs/ops/deployment/native_kubernetes.zh.md
+++ b/docs/ops/resource-providers/native_kubernetes.zh.md
@@ -81,7 +81,7 @@ $ ./bin/kubernetes-session.sh \
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 
-如果要使用自定义的 Docker 镜像部署 Flink 容器,请查看 [Flink Docker 镜像文档]({% link 
ops/deployment/docker.zh.md %})、[镜像 tags]({% link ops/deployment/docker.zh.md 
%}#image-tags)、[如何自定义 Flink Docker 镜像]({% link ops/deployment/docker.zh.md 
%}#customize-flink-image)和[启用插件]({% link ops/deployment/docker.zh.md 
%}#using-plugins)。
+如果要使用自定义的 Docker 镜像部署 Flink 容器,请查看 [Flink Docker 镜像文档]({% link 
ops/resource-providers/docker.zh.md %})、[镜像 tags]({% link 
ops/resource-providers/docker.zh.md %}#image-tags)、[如何自定义 Flink Docker 镜像]({% 
link ops/resource-providers/docker.zh.md %}#customize-flink-image)和[启用插件]({% 
link ops/resource-providers/docker.zh.md %}#using-plugins)。
 如果创建了自定义的 Docker 镜像,则可以通过设置 [`kubernetes.container.image`]({% link 
ops/config.zh.md %}#kubernetes-container-image) 配置项来指定它:
 
 {% highlight bash %}
@@ -205,7 +205,7 @@ $ kubectl delete deployment/<ClusterID>
 ### 启动 Flink Application
 <div class="codetabs" markdown="1">
 
-Application 模式允许用户创建单个镜像,其中包含他们的作业和 Flink 运行时,该镜像将按需自动创建和销毁集群组件。Flink 
社区提供了可以构建[多用途自定义镜像]({% link ops/deployment/docker.zh.md 
%}#customize-flink-image)的基础镜像。
+Application 模式允许用户创建单个镜像,其中包含他们的作业和 Flink 运行时,该镜像将按需自动创建和销毁集群组件。Flink 
社区提供了可以构建[多用途自定义镜像]({% link ops/resource-providers/docker.zh.md 
%}#customize-flink-image)的基础镜像。
 
 <div data-lang="java" markdown="1">
 {% highlight dockerfile %}
diff --git a/docs/ops/deployment/yarn_setup.md 
b/docs/ops/resource-providers/yarn_setup.md
similarity index 100%
rename from docs/ops/deployment/yarn_setup.md
rename to docs/ops/resource-providers/yarn_setup.md
diff --git a/docs/ops/deployment/yarn_setup.zh.md 
b/docs/ops/resource-providers/yarn_setup.zh.md
similarity index 100%
rename from docs/ops/deployment/yarn_setup.zh.md
rename to docs/ops/resource-providers/yarn_setup.zh.md
diff --git a/docs/ops/upgrading.md b/docs/ops/upgrading.md
index 8c82d9c..513c5e9 100644
--- a/docs/ops/upgrading.md
+++ b/docs/ops/upgrading.md
@@ -185,7 +185,7 @@ In this step, we update the framework version of the 
cluster. What this basicall
 the Flink installation with the new version. This step can depend on how you 
are running Flink in your cluster (e.g. 
 standalone, on Mesos, ...).
 
-If you are unfamiliar with installing Flink in your cluster, please read the 
[deployment and cluster setup documentation]({% link 
ops/deployment/cluster_setup.md %}).
+If you are unfamiliar with installing Flink in your cluster, please read the 
[deployment and cluster setup documentation]({% link 
ops/resource-providers/cluster_setup.md %}).
 
 ### STEP 3: Resume the job under the new Flink version from savepoint.
 
diff --git a/docs/ops/upgrading.zh.md b/docs/ops/upgrading.zh.md
index 17d5d6a..b683eea 100644
--- a/docs/ops/upgrading.zh.md
+++ b/docs/ops/upgrading.zh.md
@@ -183,7 +183,7 @@ In this step, we update the framework version of the 
cluster. What this basicall
 the Flink installation with the new version. This step can depend on how you 
are running Flink in your cluster (e.g.
 standalone, on Mesos, ...).
 
-If you are unfamiliar with installing Flink in your cluster, please read the 
[deployment and cluster setup documentation]({% link 
ops/deployment/cluster_setup.zh.md %}).
+If you are unfamiliar with installing Flink in your cluster, please read the 
[deployment and cluster setup documentation]({% link 
ops/resource-providers/cluster_setup.zh.md %}).
 
 ### STEP 3: Resume the job under the new Flink version from savepoint.
 
diff --git a/docs/redirects/aws.md b/docs/redirects/aws.md
index 55c3d42..8c3453f 100644
--- a/docs/redirects/aws.md
+++ b/docs/redirects/aws.md
@@ -2,7 +2,7 @@
 title: "Amazon Web Services (AWS)"
 layout: redirect
 redirect: /index.html
-permalink: /ops/deployment/aws.html
+permalink: /ops/resource-providers/aws.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/redirects/gce_setup.md b/docs/redirects/gce_setup.md
index cfdde17..5579473 100644
--- a/docs/redirects/gce_setup.md
+++ b/docs/redirects/gce_setup.md
@@ -2,7 +2,7 @@
 title: "Google Compute Engine Setup"
 layout: redirect
 redirect: /index.html
-permalink: /ops/deployment/gce_setup.html
+permalink: /ops/resource-providers/gce_setup.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -21,4 +21,4 @@ software distributed under the License is distributed on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
\ No newline at end of file
+-->
diff --git a/docs/redirects/local_setup_tutorial.md 
b/docs/redirects/local_setup_tutorial.md
index 9718865..98dfc75 100644
--- a/docs/redirects/local_setup_tutorial.md
+++ b/docs/redirects/local_setup_tutorial.md
@@ -1,7 +1,7 @@
 ---
 title: "Local Setup Tutorial"
 layout: redirect
-redirect: /ops/deployment/local.html
+redirect: /ops/resource-providers/local.html
 permalink: /getting-started/tutorials/local_setup.html
 ---
 <!--
diff --git a/docs/redirects/mapr.md b/docs/redirects/mapr.md
index f72a969..baf847c 100644
--- a/docs/redirects/mapr.md
+++ b/docs/redirects/mapr.md
@@ -2,7 +2,7 @@
 title: "MapR Setup"
 layout: redirect
 redirect: /index.html
-permalink: /ops/deployment/mapr_setup.html
+permalink: /ops/resource-providers/mapr_setup.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -21,4 +21,4 @@ software distributed under the License is distributed on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
\ No newline at end of file
+-->
diff --git a/docs/redirects/oss.md b/docs/redirects/oss.md
index b9df3e8..20c2459 100644
--- a/docs/redirects/oss.md
+++ b/docs/redirects/oss.md
@@ -2,7 +2,7 @@
 title: "Aliyun Object Storage Service (OSS)"
 layout: redirect
 redirect: /ops/filesystems/oss.html
-permalink: /ops/deployment/oss.html
+permalink: /ops/resource-providers/oss.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/redirects/setup_quickstart.md 
b/docs/redirects/setup_quickstart.md
index 90e2cdc..1dc9f91 100644
--- a/docs/redirects/setup_quickstart.md
+++ b/docs/redirects/setup_quickstart.md
@@ -1,7 +1,7 @@
 ---
 title: "Local Setup Tutorial"
 layout: redirect
-redirect: /ops/deployment/local.html
+redirect: /ops/resource-providers/local.html
 permalink: /quickstart/setup_quickstart.html
 ---
 <!--
diff --git a/docs/redirects/windows_local_setup.md 
b/docs/redirects/windows_local_setup.md
index f65c3eb..1a4dc78 100644
--- a/docs/redirects/windows_local_setup.md
+++ b/docs/redirects/windows_local_setup.md
@@ -1,7 +1,7 @@
 ---
 title: "Running Flink on Windows"
 layout: redirect
-redirect: /ops/deployment/local.html
+redirect: /ops/resource-providers/local.html
 permalink: /getting-started/tutorials/flink_on_windows.html
 ---
 <!--
diff --git a/docs/release-notes/flink-1.11.md b/docs/release-notes/flink-1.11.md
index 7d3e67e..66b79ef 100644
--- a/docs/release-notes/flink-1.11.md
+++ b/docs/release-notes/flink-1.11.md
@@ -32,7 +32,7 @@ these notes carefully if you are planning to upgrade your 
Flink version to 1.11.
 #### Support for Application Mode 
([FLIP-85](https://cwiki.apache.org/confluence/display/FLINK/FLIP-85+Flink+Application+Mode))
 The user can now submit applications and choose to execute their `main()` 
method on the cluster rather than the client.
 This allows for more light-weight application submission. For more details,
-see the [Application Mode 
documentation](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/#application-mode).
+see the [Application Mode 
documentation](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/#application-mode).
  
 #### Web Submission behaves the same as detached mode.
 With [FLINK-16657](https://issues.apache.org/jira/browse/FLINK-16657) the web 
submission logic changes and it exposes
@@ -68,11 +68,11 @@ The examples of `Dockerfiles` and docker image `build.sh` 
scripts have been remo
 - `flink-container/docker`
 - `flink-container/kubernetes`
 
-Check the updated user documentation for [Flink Docker 
integration](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html)
 instead. It now describes in detail how to 
[use](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#how-to-run-a-flink-image)
 and 
[customize](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#customize-flink-image)
 [the Flink official docker image](https://ci.apache.org/project [...]
-- [docker 
run](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#how-to-run-flink-image)
-- [docker 
compose](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#flink-with-docker-compose)
-- [docker 
swarm](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#flink-with-docker-swarm)
-- [standalone 
Kubernetes](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/kubernetes.html)
+Check the updated user documentation for [Flink Docker 
integration](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html)
 instead. It now describes in detail how to 
[use](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#how-to-run-a-flink-image)
 and 
[customize](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#customize-flink-image)
 [the Flink official docker image](https [...]
+- [docker 
run](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#how-to-run-flink-image)
+- [docker 
compose](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#flink-with-docker-compose)
+- [docker 
swarm](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#flink-with-docker-swarm)
+- [standalone 
Kubernetes](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/kubernetes.html)
 
 ### Memory Management
 #### New JobManager Memory Model
diff --git a/docs/release-notes/flink-1.11.zh.md 
b/docs/release-notes/flink-1.11.zh.md
index 4ed3e17..c1a1fec 100644
--- a/docs/release-notes/flink-1.11.zh.md
+++ b/docs/release-notes/flink-1.11.zh.md
@@ -32,7 +32,7 @@ these notes carefully if you are planning to upgrade your 
Flink version to 1.11.
 #### Support for Application Mode 
([FLIP-85](https://cwiki.apache.org/confluence/display/FLINK/FLIP-85+Flink+Application+Mode))
 The user can now submit applications and choose to execute their `main()` 
method on the cluster rather than the client.
 This allows for more light-weight application submission. For more details,
-see the [Application Mode 
documentation](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/#application-mode).
+see the [Application Mode 
documentation](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/#application-mode).
  
 #### Web Submission behaves the same as detached mode.
 With [FLINK-16657](https://issues.apache.org/jira/browse/FLINK-16657) the web 
submission logic changes and it exposes
@@ -68,11 +68,11 @@ The examples of `Dockerfiles` and docker image `build.sh` 
scripts have been remo
 - `flink-container/docker`
 - `flink-container/kubernetes`
 
-Check the updated user documentation for [Flink Docker 
integration](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html)
 instead. It now describes in detail how to 
[use](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#how-to-run-a-flink-image)
 and 
[customize](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#customize-flink-image)
 [the Flink official docker image](https://ci.apache.org/project [...]
-- [docker 
run](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#how-to-run-flink-image)
-- [docker 
compose](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#flink-with-docker-compose)
-- [docker 
swarm](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#flink-with-docker-swarm)
-- [standalone 
Kubernetes](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/kubernetes.html)
+Check the updated user documentation for [Flink Docker 
integration](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html)
 instead. It now describes in detail how to 
[use](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#how-to-run-a-flink-image)
 and 
[customize](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#customize-flink-image)
 [the Flink official docker image](https [...]
+- [docker 
run](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#how-to-run-flink-image)
+- [docker 
compose](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#flink-with-docker-compose)
+- [docker 
swarm](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/docker.html#flink-with-docker-swarm)
+- [standalone 
Kubernetes](https://ci.apache.org/projects/flink/flink-docs-master/ops/resource-providers/kubernetes.html)
 
 ### Memory Management
 #### New JobManager Memory Model

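As context for the `mem_migration.md` changes quoted above: the patched page explains that the deprecated `jobmanager.heap.size` option is now translated into different new options depending on the deployment. A minimal before/after `flink-conf.yaml` sketch (values are illustrative, not defaults):

```yaml
# Legacy (pre-1.11), now deprecated:
# jobmanager.heap.size: 1600m

# Standalone / Mesos deployments -- the legacy value maps to the JVM Heap:
jobmanager.memory.heap.size: 1600m

# Containerized deployments (Kubernetes, Yarn) -- the legacy value maps
# to the total process memory instead:
# jobmanager.memory.process.size: 1600m
```

If a legacy option is set without its corresponding new option, Flink applies this translation automatically, but the docs recommend switching to the new options since the legacy ones may be removed in a later release.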