zck573693104 commented on a change in pull request #16295:
URL: https://github.com/apache/flink/pull/16295#discussion_r663961148



##########
File path: docs/content.zh/docs/deployment/cli.md
##########
@@ -25,37 +25,31 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<a name="Command-Line Interface"> </a>
 # 命令行界面
 
-Flink provides a Command-Line Interface (CLI) `bin/flink` to run programs that 
-are packaged as JAR files and to control their execution. The CLI is part of 
any 
-Flink setup, available in local single node setups and in distributed setups. 
-It connects to the running JobManager specified in `conf/flink-config.yaml`.
+Flink提供了命令界面(CLI)`bin/flink` 来运行 JAR 格式的程序,同时控制其执行。该 CLI 作为所有 Flink 
安装配置的一部分,在单节点或分布式安装的方式中都可以使用。命令行程序与运行中的 JobManager 建立连接来通信,JobManager 
的连接信息可以通过`conf/flink-config.yaml`指定。

Review comment:
       Regarding “Flink提供了命令界面”:
   wouldn't “Flink提供了命令行界面”, adding “行”, read better?
   Regarding “该 CLI 作为所有 Flink 安装配置的一部分”:
   wouldn't it read better with “所有” dropped?
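For context, the invocation the translated paragraph describes looks roughly like this (a minimal sketch; the example JAR path is an assumption and varies by Flink distribution):

```shell
# Submit a job packaged as a JAR to the JobManager configured in conf/flink-config.yaml
# (the jar path below is illustrative only)
./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
```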

##########
File path: docs/content.zh/docs/deployment/cli.md
##########
@@ -139,26 +134,25 @@ Waiting for response...
 Savepoint '/tmp/flink-savepoints/savepoint-cca7bc-bb1e257f0dab' disposed.
 ```
 
-If you use custom state instances (for example custom reducing state or 
RocksDB state), you have to 
-specify the path to the program JAR with which the savepoint was triggered. 
Otherwise, you will run 
-into a `ClassNotFoundException`:
+如果使用自定义状态实例(例如自定义 reducing 状态或 RocksDB 状态),则必须指定触发 savepoint 的  JAR 
程序路径。否则,会遇到 `ClassNotFoundException` 异常 :
+
 ```bash
 $ ./bin/flink savepoint \
       --dispose <savepointPath> \ 
       --jarfile <jarFile>
 ```
 
-Triggering the savepoint disposal through the `savepoint` action does not only 
remove the data from 
-the storage but makes Flink clean up the savepoint-related metadata as well.
+通过 `savepoint` 操作触发 savepoint 废弃,不仅会将数据从存储中删除,还会使 Flink 清理与 savepoint 相关的元数据。
+
+<a name="terminating-a-savepoint"> </a>
 
-### Terminating a Job
+### 终止作业
 
-#### Stopping a Job Gracefully Creating a Final Savepoint
+<a name="stopping-a-job-gracefully-creating-a-final-savepoint"> </a>
 
-Another action for stopping a job is `stop`. It is a more graceful way of 
stopping a running streaming 
-job as the `stop`  flows from source to sink. When the user requests to stop a 
job, all sources will 
-be requested to send the last checkpoint barrier that will trigger a 
savepoint, and after the successful 
-completion of that savepoint, they will finish by calling their `cancel()` 
method. 
+#### 优雅地终止作业并创建最终 Savepoint
+
+终止作业运行的操作是 `stop`。`stop` 操作停止从 source 到 sink 的作业流,是一种更优雅的终止方式。当用户请求终止一项作业时,所有的 
sources 将被要求发送最后的 checkpoint 障碍,这会触发创建 checkpoint ,在成功完成 checkpoint 的创建后,Flink 
会调用 `cancel()` 方法来终止作业。

Review comment:
       Wouldn't it be better to change “checkpoint 障碍” to “checkpoint 屏障”?
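For reference, the graceful stop described in the paragraph under review is triggered from the CLI roughly as follows (a sketch; `<jobId>` and the savepoint directory are placeholders):

```shell
# Gracefully stop a running job: sources emit a final checkpoint barrier,
# a savepoint is written, then the job finishes via cancel()
./bin/flink stop --savepointPath /tmp/flink-savepoints <jobId>
```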

##########
File path: docs/content.zh/docs/deployment/cli.md
##########
@@ -226,191 +220,179 @@ Using standalone source with error rate 0.000000 and 
sleep delay 1 millis
 Job has been submitted with JobID 97b20a0a8ffd5c1d656328b0cd6436a6
 ```
 
-See how the command is equal to the [initial run command](#submitting-a-job) 
except for the 
-`--fromSavepoint` parameter which is used to refer to the state of the 
-[previously stopped 
job](#stopping-a-job-gracefully-creating-a-final-savepoint). A new JobID is 
-generated that can be used to maintain the job.
+请注意,该命令除了使用 `-fromSavepoint` 
参数关联[之前停止作业](#stopping-a-job-gracefully-creating-a-final-savepoint)的状态外,其它参数都与[初始
 run 命令](#submitting-a-job)相同。该操作会生成一个新的 JobID,用于维护作业的运行。
+
 
-By default, we try to match the whole savepoint state to the job being 
submitted. If you want to 
-allow to skip savepoint state that cannot be restored with the new job you can 
set the 
-`--allowNonRestoredState` flag. You need to allow this if you removed an 
operator from your program 
-that was part of the program when the savepoint was triggered and you still 
want to use the savepoint.
+默认情况下,Flink 尝试将新提交的作业恢复到完整的 savepoint 状态。如果你想忽略不能随新作业恢复的 savepoint 状态,可以设置 
`--allowNonRestoredState` 标志。当你删除了程序的某个操作,同时该操作是创建 savepoint 
时对应程序的一部分,这种情况下,如果你仍想使用 savepoint,就需要设置此参数。
 
 ```bash
 $ ./bin/flink run \
       --fromSavepoint <savepointPath> \
       --allowNonRestoredState ...
 ```
-This is useful if your program dropped an operator that was part of the 
savepoint.
+如果你的程序删除了相应 savepoint 的部分运算操作,使用该选项将很有帮助。
 
 {{< top >}}
 
-## CLI Actions
+<a name="cli-actions"> </a>
+
+## CLI 操作
+
+以下是 Flink CLI 工具支持操作的概览:
 
-Here's an overview of actions supported by Flink's CLI tool:
 <table class="table table-bordered">
     <thead>
         <tr>
-          <th class="text-left" style="width: 25%">Action</th>
-          <th class="text-left" style="width: 50%">Purpose</th>
+          <th class="text-left" style="width: 25%">操作</th>
+          <th class="text-left" style="width: 50%">目的</th>
         </tr>
     </thead>
     <tbody>
         <tr>
             <td><code class="highlighter-rouge">run</code></td>
             <td>
-                This action executes jobs. It requires at least the jar 
containing the job. Flink-
-                or job-related arguments can be passed if necessary.
+                该操作用于执行作业。必须指定包含作业的 jar 包。如有必要,可以传递与 Flink 或作业相关的参数。
             </td>
         </tr>
         <tr>
             <td><code class="highlighter-rouge">run-application</code></td>
             <td>
-                This action executes jobs in <a href="{{< ref 
"docs/deployment/overview" >}}#application-mode">
-                Application Mode</a>. Other than that, it requires the same 
parameters as the 
-                <code class="highlighter-rouge">run</code> action.
+                该操作用于在 <a href="{{< ref "docs/deployment/overview" 
>}}#application-mode">Application 模式</a>下执行作业。除此之外,它与 <code 
class="highlighter-rouge">run</code> 操作的参数相同。
             </td>
         </tr>
         <tr>
             <td><code class="highlighter-rouge">info</code></td>
             <td>
-                This action can be used to print an optimized execution graph 
of the passed job. Again,
-                the jar containing the job needs to be passed.
+                该操作用于打印作业相关的优化执行图。同样需要指定包含作业的 jar。
             </td>
         </tr>
         <tr>
             <td><code class="highlighter-rouge">list</code></td>
             <td>
-                This action lists all running or scheduled jobs.
+                该操作用于列出所有正在运行或调度中的作业。
             </td>
         </tr>
         <tr>
             <td><code class="highlighter-rouge">savepoint</code></td>
             <td>
-                This action can be used to create or disposing savepoints for 
a given job. It might be
-                necessary to specify a savepoint directory besides the JobID, 
if the 
-                <a href="{{< ref "docs/deployment/config" 
>}}#state-savepoints-dir">state.savepoints.dir</a> 
-                parameter was not specified in <code 
class="highlighter-rouge">conf/flink-config.yaml</code>.
+                该操作用于为指定的作业创建或废弃 savepoint。如果在 <code 
class="highlighter-rouge">conf/flink-config.yaml</code> 中没有指定 <a href="{{< ref 
"docs/deployment/config" >}}#state-savepoints-dir">state.savepoints.dir</a> 
参数,那么除了指定 JobID 之外还需要指定 savepoint 目录。
             </td>
         </tr>
         <tr>
             <td><code class="highlighter-rouge">cancel</code></td>
             <td>
-                This action can be used to cancel running jobs based on their 
JobID.
+                该操作用于根据作业 JobID 取消正在运行的作业。
             </td>
         </tr>
         <tr>
             <td><code class="highlighter-rouge">stop</code></td>
             <td>
-                This action combines the <code 
class="highlighter-rouge">cancel</code> and 
-                <code class="highlighter-rouge">savepoint</code> actions to 
stop a running job 
-                but also create a savepoint to start from again.
+                该操作结合了 <code class="highlighter-rouge">cancel</code> 和 <code 
class="highlighter-rouge">savepoint</code> 的功能,停止运行作业的同时会创建用于恢复作业的 savepoint 。
             </td>
         </tr>
     </tbody>
 </table>
 
-A more fine-grained description of all actions and their parameters can be 
accessed through `bin/flink --help` 
-or the usage information of each individual action `bin/flink <action> --help`.
+
+可以通过 `bin/flink --help` 查看所有支持的操作以及操作相关参数的详细信息,也可以通过 `bin/flink <action> 
--help` 单独查看指定操作的使用信息。
 
 {{< top >}}
 
-## Advanced CLI
- 
+<a name="advanced-cli"> </a>
+
+## 高级的 CLI
+
+<a name="rest-api"> </a>
+
 ### REST API
 
-The Flink cluster can be also managed using the [REST API]({{< ref 
"docs/ops/rest_api" >}}). The commands 
-described in previous sections are a subset of what is offered by Flink's REST 
endpoints. Therefore, 
-tools like `curl` can be used to get even more out of Flink.
+Flink 集群也可以使用 [REST API]({{< ref "docs/ops/rest_api" >}}) 进行管理。前面章节描述的命令是 
Flink  REST 服务端支持命令的子集。
+
+因此,可以使用 `curl`  之类的工具来进一步发挥 Flink 的作用。
+
+<a name="selecting-deployment-targets"> </a>
+
+### 选择部署方式
 
-### Selecting Deployment Targets
+Flink 兼容多种集群管理框架,例如 [Kubernetes]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 和 [YARN]({{< ref 
"docs/deployment/resource-providers/yarn" >}}),在 Resource Provider 
章节有更详细的描述。可以在不同的 [Deployment Modes]({{< ref "docs/deployment/overview" 
>}}#deployment-modes) 下提交作业。作业提交相关的参数化因底层框架和部署模式的不同而不同。
 
-Flink is compatible with multiple cluster management frameworks like 
-[Kubernetes]({{< ref "docs/deployment/resource-providers/native_kubernetes" 
>}}) or 
-[YARN]({{< ref "docs/deployment/resource-providers/yarn" >}}) which are 
described in more detail in the 
-Resource Provider section. Jobs can be submitted in different [Deployment 
Modes]({{< ref "docs/deployment/overview" >}}#deployment-modes). 
-The parameterization of a job submission differs based on the underlying 
framework and Deployment Mode. 
+`bin/flink` 提供了`--target` 参数来设置不同的选项。除此之外,仍然必须使用  `run`(针对 [Session]({{< ref 
"docs/deployment/overview" >}}#session-mode) 和 [Per-Job Mode]({{< ref 
"docs/deployment/overview" >}}#per-job-mode))或 `run-application` (针对 
[Application Mode]({{< ref "docs/deployment/overview" 
>}}#application-mode))提交作业。
+
+下面的参数组合的总结:
 
-`bin/flink` offers a parameter `--target` to handle the different options. In 
addition to that, jobs 
-have to be submitted using either `run` (for [Session]({{< ref 
"docs/deployment/overview" >}}#session-mode) 
-and [Per-Job Mode]({{< ref "docs/deployment/overview" >}}#per-job-mode)) or 
`run-application` (for 
-[Application Mode]({{< ref "docs/deployment/overview" >}}#application-mode)). 
See the following summary of 
-parameter combinations: 
 * YARN
-  * `./bin/flink run --target yarn-session`: Submission to an already running 
Flink on YARN cluster
-  * `./bin/flink run --target yarn-per-job`: Submission spinning up a Flink on 
YARN cluster in Per-Job Mode
-  * `./bin/flink run-application --target yarn-application`: Submission 
spinning up Flink on YARN cluster in Application Mode
+  * `./bin/flink run --target yarn-session`: 将作业以 `Session` 模式提交到 YARN 集群上运行的 
Flink。
+  * `./bin/flink run --target yarn-per-job`: 将作业以 `Per-Job` 模式提交到 Flink,会基于 
YARN 集群新启动一个对应 Flink。
+  * `./bin/flink run-application --target yarn-application`: 将作业以 
`yarn-application` 模式提交到 Flink,会基于 YARN 集群新启动一个对应 Flink。

Review comment:
       Wouldn't it be better to change
   “将作业以 `yarn-application` 模式提交到 Flink,会基于 YARN 集群新启动一个对应 Flink。”
   to
   “作业以 `yarn-application` 模式提交,会基于 YARN 集群新启动一个对应的 Flink job。”?
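The `--target` combinations listed in this hunk can be summarized in a small shell sketch (illustrative only; `flink_cmd_for_target` is a hypothetical helper that just assembles the matching command string for each deployment target):

```shell
# Map a deployment target to the matching bin/flink action (sketch, not an official helper)
flink_cmd_for_target() {
  case "$1" in
    yarn-session|kubernetes-session|yarn-per-job)
      # Session and Per-Job submissions use the `run` action
      echo "./bin/flink run --target $1" ;;
    yarn-application|kubernetes-application)
      # Application Mode submissions use the `run-application` action
      echo "./bin/flink run-application --target $1" ;;
    *)
      echo "unknown target: $1" >&2; return 1 ;;
  esac
}

flink_cmd_for_target yarn-per-job
flink_cmd_for_target kubernetes-application
```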

##########
File path: docs/content.zh/docs/deployment/cli.md
##########
@@ -226,191 +220,179 @@
+  * `./bin/flink run --target yarn-per-job`: 将作业以 `Per-Job` 模式提交到 Flink,会基于 YARN 集群新启动一个对应 Flink。

Review comment:
       Wouldn't it be better to change
   “将作业以 `Per-Job` 模式提交到 Flink,会基于 YARN 集群新启动一个对应 Flink。”
   to
   “作业以 `Per-Job` 模式提交,会基于 YARN 集群新启动一个对应的 Flink job。”?

##########
File path: docs/content.zh/docs/deployment/cli.md
##########
@@ -226,191 +220,179 @@
+  * `./bin/flink run --target kubernetes-session`: 将作业以 `Session` 模式提交 Kubernetes 集群上运行的 Flink。

Review comment:
       In “将作业以 `Session` 模式提交 Kubernetes 集群上运行的 Flink。”,
   wouldn't it be better to drop “的 Flink”?

class="highlighter-rouge">savepoint</code> 的功能,停止运行作业的同时会创建用于恢复作业的 savepoint 。
             </td>
         </tr>
     </tbody>
 </table>
 
-A more fine-grained description of all actions and their parameters can be 
accessed through `bin/flink --help` 
-or the usage information of each individual action `bin/flink <action> --help`.
+
+可以通过 `bin/flink --help` 查看所有支持的操作以及操作相关参数的详细信息,也可以通过 `bin/flink <action> 
--help` 单独查看指定操作的使用信息。
 
 {{< top >}}
 
-## Advanced CLI
- 
+<a name="advanced-cli"> </a>
+
+## 高级 CLI
+
+<a name="rest-api"> </a>
+
 ### REST API
 
-The Flink cluster can be also managed using the [REST API]({{< ref 
"docs/ops/rest_api" >}}). The commands 
-described in previous sections are a subset of what is offered by Flink's REST 
endpoints. Therefore, 
-tools like `curl` can be used to get even more out of Flink.
+Flink 集群也可以使用 [REST API]({{< ref "docs/ops/rest_api" >}}) 进行管理。前面章节描述的命令是 
Flink REST 服务端支持命令的子集。
+
+因此,可以使用 `curl` 之类的工具来进一步发挥 Flink 的作用。
+
+<a name="selecting-deployment-targets"> </a>
+
+### 选择部署方式
 
-### Selecting Deployment Targets
+Flink 兼容多种集群管理框架,例如 [Kubernetes]({{< ref 
"docs/deployment/resource-providers/native_kubernetes" >}}) 和 [YARN]({{< ref 
"docs/deployment/resource-providers/yarn" >}}),在 Resource Provider 
章节有更详细的描述。可以在不同的 [Deployment Modes]({{< ref "docs/deployment/overview" 
>}}#deployment-modes) 下提交作业。作业提交的参数因底层框架和部署模式的不同而有所差异。
 
-Flink is compatible with multiple cluster management frameworks like 
-[Kubernetes]({{< ref "docs/deployment/resource-providers/native_kubernetes" 
>}}) or 
-[YARN]({{< ref "docs/deployment/resource-providers/yarn" >}}) which are 
described in more detail in the 
-Resource Provider section. Jobs can be submitted in different [Deployment 
Modes]({{< ref "docs/deployment/overview" >}}#deployment-modes). 
-The parameterization of a job submission differs based on the underlying 
framework and Deployment Mode. 
+`bin/flink` 提供了 `--target` 参数来设置不同的选项。除此之外,仍然必须使用 `run`(针对 [Session]({{< ref 
"docs/deployment/overview" >}}#session-mode) 和 [Per-Job Mode]({{< ref 
"docs/deployment/overview" >}}#per-job-mode))或 `run-application` (针对 
[Application Mode]({{< ref "docs/deployment/overview" 
>}}#application-mode))提交作业。
+
+下面是参数组合的总结:
 
-`bin/flink` offers a parameter `--target` to handle the different options. In 
addition to that, jobs 
-have to be submitted using either `run` (for [Session]({{< ref 
"docs/deployment/overview" >}}#session-mode) 
-and [Per-Job Mode]({{< ref "docs/deployment/overview" >}}#per-job-mode)) or 
`run-application` (for 
-[Application Mode]({{< ref "docs/deployment/overview" >}}#application-mode)). 
See the following summary of 
-parameter combinations: 
 * YARN
-  * `./bin/flink run --target yarn-session`: Submission to an already running 
Flink on YARN cluster
-  * `./bin/flink run --target yarn-per-job`: Submission spinning up a Flink on 
YARN cluster in Per-Job Mode
-  * `./bin/flink run-application --target yarn-application`: Submission 
spinning up Flink on YARN cluster in Application Mode
+  * `./bin/flink run --target yarn-session`: 将作业以 `Session` 模式提交到 YARN 集群上运行的 
Flink。

Review comment:
       将作业以 `Session` 模式提交到 YARN 集群上运行“的” Flink。
   Dropping the “的” would make this read more smoothly.
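
As an aside for reviewers of this hunk, the restore-from-savepoint invocation being translated can be sketched as a dry run. The savepoint path and jar name below are placeholders for illustration, not values from the PR:

```shell
#!/bin/sh
# Placeholder values for illustration only; substitute real paths.
SAVEPOINT_PATH="/tmp/flink-savepoints/savepoint-cca7bc-bb1e257f0dab"
JAR="./examples/streaming/StateMachineExample.jar"

# Dry run: print the command instead of executing it against a cluster.
# --allowNonRestoredState skips savepoint state belonging to operators
# that were removed from the program after the savepoint was taken.
echo ./bin/flink run \
     --fromSavepoint "$SAVEPOINT_PATH" \
     --allowNonRestoredState \
     "$JAR"
```

Dropping the leading `echo` would run the submission for real against a running cluster.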

##########
File path: docs/content.zh/docs/deployment/cli.md
##########
@@ -226,191 +220,179 @@ Using standalone source with error rate 0.000000 and 
sleep delay 1 millis
 Job has been submitted with JobID 97b20a0a8ffd5c1d656328b0cd6436a6
 ```
 
-See how the command is equal to the [initial run command](#submitting-a-job) 
except for the 
-`--fromSavepoint` parameter which is used to refer to the state of the 
-[previously stopped 
job](#stopping-a-job-gracefully-creating-a-final-savepoint). A new JobID is 
-generated that can be used to maintain the job.
+请注意,该命令除了使用 `--fromSavepoint` 
参数关联[之前停止作业](#stopping-a-job-gracefully-creating-a-final-savepoint)的状态外,其它参数都与[初始
 run 命令](#submitting-a-job)相同。该操作会生成一个新的 JobID,用于维护作业的运行。
+
 
-By default, we try to match the whole savepoint state to the job being 
submitted. If you want to 
-allow to skip savepoint state that cannot be restored with the new job you can 
set the 
-`--allowNonRestoredState` flag. You need to allow this if you removed an 
operator from your program 
-that was part of the program when the savepoint was triggered and you still 
want to use the savepoint.
 * YARN
+  * `./bin/flink run --target yarn-per-job`: 将作业以 `Per-Job` 模式提交到 Flink,会基于 
YARN 集群新启动一个对应 Flink。
+  * `./bin/flink run-application --target yarn-application`: 将作业以 
`yarn-application` 模式提交到 Flink,会基于 YARN 集群新启动一个对应 Flink。
 * Kubernetes
-  * `./bin/flink run --target kubernetes-session`: Submission to an already 
running Flink on Kubernetes cluster
-  * `./bin/flink run-application --target kubernetes-application`: Submission 
spinning up a Flink on Kubernetes cluster in Application Mode
+  * `./bin/flink run --target kubernetes-session`: 将作业以 `Session` 模式提交到 
Kubernetes 集群上运行的 Flink。
+  * `./bin/flink run-application --target kubernetes-application`: 将作业以 
`yarn-application` 模式提交到 Flink,会基于 Kubernetes 集群新启动一个对应 Flink 。

Review comment:
       “将作业以 `yarn-application` 模式提交,会基于 Kubernetes 集群新启动一个对应 Flink 。”
   Changing this to
   “将作业以 `yarn-application` 模式提交到 Flink,会基于 Kubernetes 集群新启动一个对应 Flink 。”
   might read better.
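
The `--target` combinations discussed in this thread can be summarized as a dry-run sketch; the jar path below is a placeholder for illustration, not from the PR:

```shell
#!/bin/sh
# Placeholder jar for illustration only.
JAR="./examples/streaming/TopSpeedWindowing.jar"

# Dry run: each echo prints the submission command for one target.
echo ./bin/flink run --target yarn-session "$JAR"                        # existing Flink session on YARN
echo ./bin/flink run --target yarn-per-job "$JAR"                        # new per-job cluster on YARN
echo ./bin/flink run-application --target yarn-application "$JAR"        # Application Mode on YARN
echo ./bin/flink run --target kubernetes-session "$JAR"                  # existing Flink session on Kubernetes
echo ./bin/flink run-application --target kubernetes-application "$JAR"  # Application Mode on Kubernetes
```

Note that `run` pairs with the session and per-job targets, while `run-application` pairs with the application targets.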




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

