This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 682bfb08a67e9a5b4ab41e63340f63c1102db42f
Author: Till Rohrmann <[email protected]>
AuthorDate: Wed Nov 25 10:58:18 2020 +0100

    [FLINK-20342][docs] Move plugins under filesystems
---
 docs/deployment/external_resources.md                      | 4 ++--
 docs/deployment/external_resources.zh.md                   | 4 ++--
 docs/deployment/filesystems/index.md                       | 6 +++---
 docs/deployment/filesystems/index.zh.md                    | 6 +++---
 docs/deployment/{ => filesystems}/plugins.md               | 4 ++--
 docs/deployment/{ => filesystems}/plugins.zh.md            | 4 ++--
 docs/deployment/index.md                                   | 2 +-
 docs/deployment/index.zh.md                                | 2 +-
 docs/deployment/resource-providers/docker.md               | 2 +-
 docs/deployment/resource-providers/docker.zh.md            | 2 +-
 docs/deployment/resource-providers/native_kubernetes.md    | 2 +-
 docs/deployment/resource-providers/native_kubernetes.zh.md | 2 +-
 docs/monitoring/metrics.md                                 | 2 +-
 docs/monitoring/metrics.zh.md                              | 2 +-
 14 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/docs/deployment/external_resources.md b/docs/deployment/external_resources.md
index 43f31c4..9259111 100644
--- a/docs/deployment/external_resources.md
+++ b/docs/deployment/external_resources.md
@@ -62,7 +62,7 @@ To enable an external resource with the external resource framework, you need to
 ## Prepare plugins
 
 You need to prepare the external resource plugin and put it into the `plugins/` folder of your Flink distribution, see
-[Flink Plugins]({% link deployment/plugins.md %}). Apache Flink provides a first-party [plugin for GPU resources](#plugin-for-gpu-resources). You can also
+[Flink Plugins]({% link deployment/filesystems/plugins.md %}). Apache Flink provides a first-party [plugin for GPU resources](#plugin-for-gpu-resources). You can also
 [implement a plugin for your custom resource type](#implement-a-plugin-for-your-custom-resource-type).
 
 ## Configurations
@@ -240,7 +240,7 @@ and write the factory class name (e.g. `your.domain.FPGADriverFactory`) to it.
 
 Then, create a jar which includes `FPGADriver`, `FPGADriverFactory`, `META-INF/services/` and all the external dependencies.
 Make a directory in `plugins/` of your Flink distribution with an arbitrary name, e.g. "fpga", and put the jar into this directory.
-See [Flink Plugin]({% link deployment/plugins.md %}) for more details.
+See [Flink Plugin]({% link deployment/filesystems/plugins.md %}) for more details.
 
 <div class="alert alert-info">
     <strong>Note:</strong> External resources are shared by all operators running on the same machine. The community might add external resource isolation in a future release.
diff --git a/docs/deployment/external_resources.zh.md b/docs/deployment/external_resources.zh.md
index b9a803c..fdeecbb 100644
--- a/docs/deployment/external_resources.zh.md
+++ b/docs/deployment/external_resources.zh.md
@@ -62,7 +62,7 @@ under the License.
 
 ## 准备插件
 
-你需要为使用的扩展资源准备插件,并将其放入 Flink 发行版的 `plugins/` 文件夹中, 参看 [Flink Plugins]({% link deployment/plugins.zh.md %})。
+你需要为使用的扩展资源准备插件,并将其放入 Flink 发行版的 `plugins/` 文件夹中, 参看 [Flink Plugins]({% link deployment/filesystems/plugins.zh.md %})。
 Flink 提供了第一方的 [GPU 资源插件](#plugin-for-gpu-resources)。你同样可以为你所使用的扩展资源实现自定义插件[实现自定义插件](#implement-a-plugin-for-your-custom-resource-type)。
 
 ## 配置项
@@ -227,7 +227,7 @@ class FPGAInfo extends ExternalResourceInfo {
 在 `META-INF/services/` 中创建名为 `org.apache.flink.api.common.externalresource.ExternalResourceDriverFactory` 的文件,向其中写入工厂类名,如 `your.domain.FPGADriverFactory`。
 
 之后,将 `FPGADriver`,`FPGADriverFactory`,`META-INF/services/` 和所有外部依赖打入 jar 包。在你的 Flink 发行版的 `plugins/` 文件夹中创建一个名为“fpga”的文件夹,将打好的 jar 包放入其中。
-更多细节请查看 [Flink Plugin]({% link deployment/plugins.zh.md %})。
+更多细节请查看 [Flink Plugin]({% link deployment/filesystems/plugins.zh.md %})。
 
 <div class="alert alert-info">
      <strong>提示:</strong> 扩展资源由运行在同一台机器上的所有算子共享。社区可能会在未来的版本中支持外部资源隔离。
diff --git a/docs/deployment/filesystems/index.md b/docs/deployment/filesystems/index.md
index 452480a..446b5e1 100644
--- a/docs/deployment/filesystems/index.md
+++ b/docs/deployment/filesystems/index.md
@@ -60,7 +60,7 @@ The Apache Flink project supports the following file systems:
   - **[Azure Blob Storage]({% link deployment/filesystems/azure.md %})** is supported by `flink-azure-fs-hadoop` and registered under the *wasb(s)://* URI schemes.
   The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
 
-Except **MapR FS**, you can and should use any of them as [plugins]({% link deployment/plugins.md %}).
+Except **MapR FS**, you can and should use any of them as [plugins]({% link deployment/filesystems/plugins.md %}).
 
 To use a pluggable file systems, copy the corresponding JAR file from the `opt` directory to a directory under `plugins` directory
 of your Flink distribution before starting Flink, e.g.
@@ -70,13 +70,13 @@ mkdir ./plugins/s3-fs-hadoop
 cp ./opt/flink-s3-fs-hadoop-{{ site.version }}.jar ./plugins/s3-fs-hadoop/
 {% endhighlight %}
 
-<span class="label label-danger">Attention</span> The [plugin]({% link deployment/plugins.md %}) mechanism for file systems was introduced in Flink version `1.9` to
+<span class="label label-danger">Attention</span> The [plugin]({% link deployment/filesystems/plugins.md %}) mechanism for file systems was introduced in Flink version `1.9` to
 support dedicated Java class loaders per plugin and to move away from the class shading mechanism.
 You can still use the provided file systems (or your own implementations) via the old mechanism by copying the corresponding
 JAR file into `lib` directory. However, **since 1.10, s3 plugins must be loaded through the plugin mechanism**; the old
 way no longer works as these plugins are not shaded anymore (or more specifically the classes are not relocated since 1.10).
 
-It's encouraged to use the [plugins]({% link deployment/plugins.md %})-based loading mechanism for file systems that support it. Loading file systems components from the `lib`
+It's encouraged to use the [plugins]({% link deployment/filesystems/plugins.md %})-based loading mechanism for file systems that support it. Loading file systems components from the `lib`
 directory will not supported in future Flink versions.
 
 ## Adding a new pluggable File System implementation
diff --git a/docs/deployment/filesystems/index.zh.md b/docs/deployment/filesystems/index.zh.md
index 087602e..d9106db 100644
--- a/docs/deployment/filesystems/index.zh.md
+++ b/docs/deployment/filesystems/index.zh.md
@@ -51,7 +51,7 @@ Apache Flink 支持下列文件系统:
 
   - **[Azure Blob Storage]({% link deployment/filesystems/azure.zh.md %})** 由`flink-azure-fs-hadoop` 支持,并通过 *wasb(s)://* URI scheme 使用。该实现基于 [Hadoop Project](https://hadoop.apache.org/),但其是独立的,没有依赖项。
 
-除 **MapR FS** 之外,上述文件系统可以并且需要作为[插件]({% link deployment/plugins.zh.md %})使用。
+除 **MapR FS** 之外,上述文件系统可以并且需要作为[插件]({% link deployment/filesystems/plugins.zh.md %})使用。
 
 使用外部文件系统时,在启动 Flink 之前需将对应的 JAR 文件从 `opt` 目录复制到 Flink 发行版 `plugin` 目录下的某一文件夹中,例如:
 
@@ -60,9 +60,9 @@ mkdir ./plugins/s3-fs-hadoop
 cp ./opt/flink-s3-fs-hadoop-{{ site.version }}.jar ./plugins/s3-fs-hadoop/
 {% endhighlight %}
 
-<span class="label label-danger">注意</span> 文件系统的[插件]({% link deployment/plugins.zh.md %})机制在 Flink 版本 1.9 中引入,以支持每个插件专有 Java 类加载器,并避免类隐藏机制。您仍然可以通过旧机制使用文件系统,即将对应的 JAR 文件复制到 `lib` 目录中,或使用您自己的实现方式,但是从版本 1.10 开始,**S3 插件必须通过插件机制加载**,因为这些插件不再被隐藏(版本 1.10 之后类不再被重定位),旧机制不再可用。
+<span class="label label-danger">注意</span> 文件系统的[插件]({% link deployment/filesystems/plugins.zh.md %})机制在 Flink 版本 1.9 中引入,以支持每个插件专有 Java 类加载器,并避免类隐藏机制。您仍然可以通过旧机制使用文件系统,即将对应的 JAR 文件复制到 `lib` 目录中,或使用您自己的实现方式,但是从版本 1.10 开始,**S3 插件必须通过插件机制加载**,因为这些插件不再被隐藏(版本 1.10 之后类不再被重定位),旧机制不再可用。
 
-尽可能通过基于[插件]({% link deployment/plugins.zh.md %})的加载机制使用支持的文件系统。未来的 Flink 版本将不再支持通过 `lib` 目录加载文件系统组件。
+尽可能通过基于[插件]({% link deployment/filesystems/plugins.zh.md %})的加载机制使用支持的文件系统。未来的 Flink 版本将不再支持通过 `lib` 目录加载文件系统组件。
 
 ## 添加新的外部文件系统实现
 
diff --git a/docs/deployment/plugins.md b/docs/deployment/filesystems/plugins.md
similarity index 97%
rename from docs/deployment/plugins.md
rename to docs/deployment/filesystems/plugins.md
index e46d6c1..8f342c1 100644
--- a/docs/deployment/plugins.md
+++ b/docs/deployment/filesystems/plugins.md
@@ -1,7 +1,7 @@
 ---
 title: "Plugins"
 nav-id: plugins
-nav-parent_id: deployment
+nav-parent_id: filesystems
 nav-pos: 4
 ---
 <!--
@@ -70,7 +70,7 @@ possible across Flink core, plugins, and user code.
 
 ## File Systems
 
-All [file systems](filesystems) **except MapR** are pluggable. That means they can and should
+All [file systems]({%link deployment/filesystems/index.md %}) **except MapR** are pluggable. That means they can and should
 be used as plugins. To use a pluggable file system, copy the corresponding JAR file from the `opt`
 directory to a directory under `plugins` directory of your Flink distribution before starting Flink,
 e.g.
diff --git a/docs/deployment/plugins.zh.md b/docs/deployment/filesystems/plugins.zh.md
similarity index 97%
rename from docs/deployment/plugins.zh.md
rename to docs/deployment/filesystems/plugins.zh.md
index e46d6c1..8f342c1 100644
--- a/docs/deployment/plugins.zh.md
+++ b/docs/deployment/filesystems/plugins.zh.md
@@ -1,7 +1,7 @@
 ---
 title: "Plugins"
 nav-id: plugins
-nav-parent_id: deployment
+nav-parent_id: filesystems
 nav-pos: 4
 ---
 <!--
@@ -70,7 +70,7 @@ possible across Flink core, plugins, and user code.
 
 ## File Systems
 
-All [file systems](filesystems) **except MapR** are pluggable. That means they can and should
+All [file systems]({%link deployment/filesystems/index.md %}) **except MapR** are pluggable. That means they can and should
 be used as plugins. To use a pluggable file system, copy the corresponding JAR file from the `opt`
 directory to a directory under `plugins` directory of your Flink distribution before starting Flink,
 e.g.
diff --git a/docs/deployment/index.md b/docs/deployment/index.md
index 48ef728..77c5e66 100644
--- a/docs/deployment/index.md
+++ b/docs/deployment/index.md
@@ -259,7 +259,7 @@ applications. These approaches differ based on the deployment mode and target, b
 To provide a dependency, there are the following options:
 - files in the **`lib/` folder** are added to the classpath used to start Flink. It is suitable for libraries such as Hadoop or file systems not available as plugins. Beware that classes added here can potentially interfere with Flink, for example if you are adding a different version of a library already provided by Flink.
 
-- **`plugins/<name>/`** are loaded at runtime by Flink through separate classloaders to avoid conflicts with classes loaded and used by Flink. Only jar files which are prepared as [plugins]({% link deployment/plugins.md %}) can be added here.
+- **`plugins/<name>/`** are loaded at runtime by Flink through separate classloaders to avoid conflicts with classes loaded and used by Flink. Only jar files which are prepared as [plugins]({% link deployment/filesystems/plugins.md %}) can be added here.
 
 ### Download Maven dependencies locally
 
diff --git a/docs/deployment/index.zh.md b/docs/deployment/index.zh.md
index 4428219..955abe5 100644
--- a/docs/deployment/index.zh.md
+++ b/docs/deployment/index.zh.md
@@ -259,7 +259,7 @@ applications. These approaches differ based on the deployment mode and target, b
 To provide a dependency, there are the following options:
 - files in the **`lib/` folder** are added to the classpath used to start Flink. It is suitable for libraries such as Hadoop or file systems not available as plugins. Beware that classes added here can potentially interfere with Flink, for example if you are adding a different version of a library already provided by Flink.
 
-- **`plugins/<name>/`** are loaded at runtime by Flink through separate classloaders to avoid conflicts with classes loaded and used by Flink. Only jar files which are prepared as [plugins]({% link deployment/plugins.zh.md %}) can be added here.
+- **`plugins/<name>/`** are loaded at runtime by Flink through separate classloaders to avoid conflicts with classes loaded and used by Flink. Only jar files which are prepared as [plugins]({% link deployment/filesystems/plugins.zh.md %}) can be added here.
 
 ### Download Maven dependencies locally
 
diff --git a/docs/deployment/resource-providers/docker.md b/docs/deployment/resource-providers/docker.md
index fca0c98..01e1c08 100644
--- a/docs/deployment/resource-providers/docker.md
+++ b/docs/deployment/resource-providers/docker.md
@@ -254,7 +254,7 @@ The `flink-conf.yaml` file must have write permission so that the Docker entry p
 
 ### Using plugins
 
-As described in the [plugins]({% link deployment/plugins.md %}) documentation page: in order to use plugins they must be
+As described in the [plugins]({% link deployment/filesystems/plugins.md %}) documentation page: in order to use plugins they must be
 copied to the correct location in the Flink installation in the Docker container for them to work.
 
 If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
diff --git a/docs/deployment/resource-providers/docker.zh.md b/docs/deployment/resource-providers/docker.zh.md
index 9ecf56a..5cc3573 100644
--- a/docs/deployment/resource-providers/docker.zh.md
+++ b/docs/deployment/resource-providers/docker.zh.md
@@ -254,7 +254,7 @@ The `flink-conf.yaml` file must have write permission so that the Docker entry p
 
 ### Using plugins
 
-As described in the [plugins]({% link deployment/plugins.zh.md %}) documentation page: in order to use plugins they must be
+As described in the [plugins]({% link deployment/filesystems/plugins.zh.md %}) documentation page: in order to use plugins they must be
 copied to the correct location in the Flink installation in the Docker container for them to work.
 
 If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
diff --git a/docs/deployment/resource-providers/native_kubernetes.md b/docs/deployment/resource-providers/native_kubernetes.md
index 29316b2..87a76c7 100644
--- a/docs/deployment/resource-providers/native_kubernetes.md
+++ b/docs/deployment/resource-providers/native_kubernetes.md
@@ -287,7 +287,7 @@ If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tun
 
 ## Using plugins
 
-In order to use [plugins]({% link deployment/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work.
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work.
 You can use the built-in plugins without mounting a volume or building a custom Docker image.
 For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
 
diff --git a/docs/deployment/resource-providers/native_kubernetes.zh.md b/docs/deployment/resource-providers/native_kubernetes.zh.md
index 00f9f05..b63b30f 100644
--- a/docs/deployment/resource-providers/native_kubernetes.zh.md
+++ b/docs/deployment/resource-providers/native_kubernetes.zh.md
@@ -286,7 +286,7 @@ STDOUT 和 STDERR 只会输出到console。你可以使用 `kubectl logs <PodNam
 
 ## 启用插件
 
-为了使用[插件]({% link deployment/plugins.zh.md %}),必须要将相应的Jar包拷贝到JobManager和TaskManager Pod里的对应目录。
+为了使用[插件]({% link deployment/filesystems/plugins.zh.md %}),必须要将相应的Jar包拷贝到JobManager和TaskManager Pod里的对应目录。
 使用内置的插件就不需要再挂载额外的存储卷或者构建自定义镜像。
 例如,可以使用如下命令通过设置环境变量来给你的Flink应用启用S3插件。
 
diff --git a/docs/monitoring/metrics.md b/docs/monitoring/metrics.md
index 91200f6..a7313be 100644
--- a/docs/monitoring/metrics.md
+++ b/docs/monitoring/metrics.md
@@ -586,7 +586,7 @@ metrics.reporter.my_other_reporter.port: 10000
 {% endhighlight %}
 
 **Important:** The jar containing the reporter must be accessible when Flink is started. Reporters that support the
- `factory.class` property can be loaded as [plugins]({% link deployment/plugins.md %}). Otherwise the jar must be placed
+ `factory.class` property can be loaded as [plugins]({% link deployment/filesystems/plugins.md %}). Otherwise the jar must be placed
  in the /lib folder. Reporters that are shipped with Flink (i.e., all reporters documented on this page) are available
  by default.
 
diff --git a/docs/monitoring/metrics.zh.md b/docs/monitoring/metrics.zh.md
index 5f8fc0e..bb1872b 100644
--- a/docs/monitoring/metrics.zh.md
+++ b/docs/monitoring/metrics.zh.md
@@ -586,7 +586,7 @@ metrics.reporter.my_other_reporter.port: 10000
 {% endhighlight %}
 
 **Important:** The jar containing the reporter must be accessible when Flink is started. Reporters that support the
- `factory.class` property can be loaded as [plugins]({% link deployment/plugins.zh.md %}). Otherwise the jar must be placed
+ `factory.class` property can be loaded as [plugins]({% link deployment/filesystems/plugins.zh.md %}). Otherwise the jar must be placed
  in the /lib folder. Reporters that are shipped with Flink (i.e., all reporters documented on this page) are available
  by default.
 
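For reference, every page touched by this commit documents the same plugin mechanism: a pluggable file system jar is copied from `opt/` into its own subdirectory under `plugins/`. A minimal sketch of that layout, using a scratch directory and a placeholder jar name rather than a real distribution:

```shell
# Sketch of the plugin layout the patched docs describe: each plugin gets
# its own subdirectory under plugins/, holding its jar.
# FLINK_HOME and the jar name below are placeholders, not taken from this commit.
FLINK_HOME="$(mktemp -d)"                               # stand-in for a real Flink distribution
mkdir -p "$FLINK_HOME/opt" "$FLINK_HOME/plugins"
touch "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.12.0.jar"   # placeholder for the shipped jar
mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop"
cp "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.12.0.jar" "$FLINK_HOME/plugins/s3-fs-hadoop/"
ls "$FLINK_HOME/plugins/s3-fs-hadoop"
```

On a real installation you would point `FLINK_HOME` at your Flink distribution and copy the shipped `flink-s3-fs-hadoop-{{ site.version }}.jar` rather than a placeholder.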
