klion26 commented on a change in pull request #8300: [FLINK-11638][docs-zh] 
Translate Savepoints page into Chinese
URL: https://github.com/apache/flink/pull/8300#discussion_r300007693
 
 

 ##########
 File path: docs/ops/state/savepoints.zh.md
 ##########
 @@ -78,160 +68,162 @@ source-id   | State of StatefulSource
 mapper-id   | State of StatefulMapper
 {% endhighlight %}
 
-In the above example, the print sink is stateless and hence not part of the 
savepoint state. By default, we try to map each entry of the savepoint back to 
the new program.
+在上面的示例中,print sink 是无状态的,因此不是 Savepoint 状态的一部分。默认情况下,我们尝试将 Savepoint 
的每个条目映射回新程序。
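上表中 source-id、mapper-id 这样的映射,通常来自类似下面这样的作业骨架(仅作示意,其中 StatefulSource、StatefulMapper 代表假设的用户自定义实现):

{% highlight java %}
// 示意代码:通过 uid() 为有状态算子分配显式 ID,
// StatefulSource 和 StatefulMapper 是假设的用户自定义类。
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

env.addSource(new StatefulSource())
   .uid("source-id")            // 有状态 source 的 ID
   .map(new StatefulMapper())
   .uid("mapper-id")            // 有状态 mapper 的 ID
   .print();                    // print sink 无状态,不需要 ID

env.execute("Savepoint Example");
{% endhighlight %}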
 
-## Operations
+## 操作
 
-You can use the [command line client]({{ site.baseurl 
}}/ops/cli.html#savepoints) to *trigger savepoints*, *cancel a job with a 
savepoint*, *resume from savepoints*, and *dispose savepoints*.
+你可以使用[命令行客户端]({{site.baseurl}}/zh/ops/cli.html#Savepoint)来*触发 Savepoint*,*触发 Savepoint 并取消作业*,*从 Savepoint 恢复*,以及*删除 Savepoint*。
 
-With Flink >= 1.2.0 it is also possible to *resume from savepoints* using the 
webui.
+从 Flink 1.2.0 开始,还可以使用 webui *从 Savepoint 恢复*。
 
-### Triggering Savepoints
+### 触发 Savepoint
 
-When triggering a savepoint, a new savepoint directory is created where the 
data as well as the meta data will be stored. The location of this directory 
can be controlled by [configuring a default target directory](#configuration) 
or by specifying a custom target directory with the trigger commands (see the 
[`:targetDirectory` argument](#trigger-a-savepoint)).
+当触发 Savepoint 时,将创建一个新的 Savepoint 目录,其中存储数据和元数据。可以通过[配置默认目标目录](#configuration),或使用触发命令指定自定义目标目录(参见 [`:targetDirectory` 参数](#trigger-a-savepoint))来控制该目录的位置。
 
 <div class="alert alert-warning">
-<strong>Attention:</strong> The target directory has to be a location 
accessible by both the JobManager(s) and TaskManager(s) e.g. a location on a 
distributed file-system.
+<strong>注意:</strong>目标目录必须是 JobManager(s) 和 TaskManager(s) 
都可以访问的位置,例如分布式文件系统上的位置。
 </div>
 
-For example with a `FsStateBackend` or `RocksDBStateBackend`:
+例如,使用 `FsStateBackend` 或 `RocksDBStateBackend`:
 
 {% highlight shell %}
-# Savepoint target directory
-/savepoints/
+# Savepoint 目标目录
+/savepoints/
 
-# Savepoint directory
-/savepoints/savepoint-:shortjobid-:savepointid/
+# Savepoint 目录
+/savepoints/savepoint-:shortjobid-:savepointid/
 
-# Savepoint file contains the checkpoint meta data
-/savepoints/savepoint-:shortjobid-:savepointid/_metadata
+# Savepoint 文件包含 Checkpoint 元数据
+/savepoints/savepoint-:shortjobid-:savepointid/_metadata
 
-# Savepoint state
-/savepoints/savepoint-:shortjobid-:savepointid/...
+# Savepoint 状态
+/savepoints/savepoint-:shortjobid-:savepointid/...
 {% endhighlight %}
 
 <div class="alert alert-info">
-  <strong>Note:</strong>
-Although it looks as if the savepoints may be moved, it is currently not 
possible due to absolute paths in the <code>_metadata</code> file.
-Please follow <a href="https://issues.apache.org/jira/browse/FLINK-5778">FLINK-5778</a> for progress on lifting this restriction.
+  <strong>注意:</strong>
+虽然看起来好像可以移动 Savepoint,但由于 <code>_metadata</code> 中保存的是绝对路径,因此暂时不支持。
+请关注 <a href="https://issues.apache.org/jira/browse/FLINK-5778">FLINK-5778</a> 以了解取消此限制的进展。
 </div>
-
-Note that if you use the `MemoryStateBackend`, metadata *and* savepoint state 
will be stored in the `_metadata` file. Since it is self-contained, you may 
move the file and restore from any location.
+请注意,如果使用 `MemoryStateBackend`,则元数据*和* Savepoint 状态都将存储在 `_metadata` 文件中。由于它是自包含的,你可以移动该文件并从任何位置恢复。
 
 <div class="alert alert-warning">
-  <strong>Attention:</strong> It is discouraged to move or delete the last 
savepoint of a running job, because this might interfere with failure-recovery. 
Savepoints have side-effects on exactly-once sinks, therefore 
-  to ensure exactly-once semantics, if there is no checkpoint after the last 
savepoint, the savepoint will be used for recovery. 
+  <strong>注意:</strong> 不建议移动或删除正在运行作业的最后一个 Savepoint,因为这可能会干扰故障恢复。Savepoint 对精确一次的 sink 有副作用,因此为了确保精确一次的语义,如果在最后一个 Savepoint 之后没有 Checkpoint,将使用该 Savepoint 进行恢复。
 </div>
 
-#### Trigger a Savepoint
+
+#### 触发 Savepoint
 
 {% highlight shell %}
 $ bin/flink savepoint :jobId [:targetDirectory]
 {% endhighlight %}
 
-This will trigger a savepoint for the job with ID `:jobId`, and returns the 
path of the created savepoint. You need this path to restore and dispose 
savepoints.
+这将触发 ID 为 `:jobId` 的作业的 Savepoint,并返回创建的 Savepoint 的路径。你需要此路径来恢复和删除 Savepoint。
 
-#### Trigger a Savepoint with YARN
+#### 使用 YARN 触发 Savepoint
 
 {% highlight shell %}
 $ bin/flink savepoint :jobId [:targetDirectory] -yid :yarnAppId
 {% endhighlight %}
 
-This will trigger a savepoint for the job with ID `:jobId` and YARN 
application ID `:yarnAppId`, and returns the path of the created savepoint.
+这将触发 ID 为 `:jobId`、YARN 应用程序 ID 为 `:yarnAppId` 的作业的 Savepoint,并返回创建的 Savepoint 的路径。
 
-#### Cancel Job with Savepoint
+#### 使用 Savepoint 取消作业
 
 {% highlight shell %}
 $ bin/flink cancel -s [:targetDirectory] :jobId
 {% endhighlight %}
 
-This will atomically trigger a savepoint for the job with ID `:jobid` and 
cancel the job. Furthermore, you can specify a target file system directory to 
store the savepoint in.  The directory needs to be accessible by the 
JobManager(s) and TaskManager(s).
+这将原子地触发 ID 为 `:jobid` 的作业的 Savepoint,并取消该作业。此外,你可以指定一个目标文件系统目录来存储 Savepoint。该目录需要能被 JobManager(s) 和 TaskManager(s) 访问。
 
-### Resuming from Savepoints
+### 从 Savepoint 恢复
 
 {% highlight shell %}
 $ bin/flink run -s :savepointPath [:runArgs]
 {% endhighlight %}
 
-This submits a job and specifies a savepoint to resume from. You may give a 
path to either the savepoint's directory or the `_metadata` file.
+这将提交一个作业,并指定要从中恢复的 Savepoint。你可以给出 Savepoint 目录或 `_metadata` 文件的路径。
 
-#### Allowing Non-Restored State
+#### 允许非恢复状态
 
-By default the resume operation will try to map all state of the savepoint 
back to the program you are restoring with. If you dropped an operator, you can 
allow to skip state that cannot be mapped to the new program via 
`--allowNonRestoredState` (short: `-n`) option:
+默认情况下,恢复操作会尝试将 Savepoint 的所有状态映射回你要恢复的程序。如果删除了某个算子,则可以通过 `--allowNonRestoredState`(简写:`-n`)选项跳过无法映射到新程序的状态:
 
 {% highlight shell %}
 $ bin/flink run -s :savepointPath -n [:runArgs]
 {% endhighlight %}
 
-### Disposing Savepoints
+### 删除 Savepoint
 
 {% highlight shell %}
 $ bin/flink savepoint -d :savepointPath
 {% endhighlight %}
 
-This disposes the savepoint stored in `:savepointPath`.
+这将删除存储在 `:savepointPath` 中的 Savepoint。
+
+请注意,还可以通过常规的文件系统操作手动删除 Savepoint,而不会影响其他 Savepoint 或 Checkpoint(请记住,每个 Savepoint 都是自包含的)。在 Flink 1.2 之前,这是一项比较繁琐的任务,需要使用上面的 savepoint 命令来完成。
 
-Note that it is possible to also manually delete a savepoint via regular file 
system operations without affecting other savepoints or checkpoints (recall 
that each savepoint is self-contained). Up to Flink 1.2, this was a more 
tedious task which was performed with the savepoint command above.
+### 配置
 
-### Configuration
+你可以通过 `state.savepoints.dir` 配置 Savepoint 的默认目标目录。触发 Savepoint 时,将使用此目录来存储 Savepoint。你可以通过使用触发命令指定自定义目标目录来覆盖默认值(请参阅 [`:targetDirectory` 参数](#trigger-a-savepoint))。
 
-You can configure a default savepoint target directory via the 
`state.savepoints.dir` key. When triggering savepoints, this directory will be 
used to store the savepoint. You can overwrite the default by specifying a 
custom target directory with the trigger commands (see the [`:targetDirectory` 
argument](#trigger-a-savepoint)).
 
 {% highlight yaml %}
-# Default savepoint target directory
-state.savepoints.dir: hdfs:///flink/savepoints
+# 默认 Savepoint 目标目录
+state.savepoints.dir: hdfs:///flink/savepoints
 {% endhighlight %}
 
-If you neither configure a default nor specify a custom target directory, 
triggering the savepoint will fail.
+如果既未配置默认值,也未指定自定义目标目录,则触发 Savepoint 将失败。
 
 <div class="alert alert-warning">
-<strong>Attention:</strong> The target directory has to be a location 
accessible by both the JobManager(s) and TaskManager(s) e.g. a location on a 
distributed file-system.
+<strong>注意:</strong>目标目录必须是 JobManager(s) 和 TaskManager(s) 
可访问的位置,例如,分布式文件系统上的位置。
 </div>
 
+
 ## F.A.Q
 
-### Should I assign IDs to all operators in my job?
+### 我应该为我作业中的所有算子分配 ID 吗?
+
+根据经验,是的。严格来说,仅通过 `uid` 方法给有状态算子分配 ID 就足够了。Savepoint 仅包含这些有状态算子的状态,无状态算子不是 Savepoint 的一部分。
+
+
+在实践中,建议给所有算子分配 ID,因为 Flink 的一些内置算子(如 Window 算子)也是有状态的,而哪些内置算子是有状态的、哪些不是并不明显。如果你完全确定某个算子是无状态的,则可以跳过 `uid` 方法。
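
例如,下面是一个示意性的片段(其中的 events 流、键控字段以及 "windowed-counts" 这个 ID 均为假设的示例,并非固定写法),展示了如何给内置的窗口算子显式分配 ID:

{% highlight java %}
// 示意代码:内置的窗口算子同样是有状态的,建议通过 uid() 显式分配 ID。
// events 流、按第一个字段分组以及 "windowed-counts" 这个 ID 都是假设的示例。
DataStream<Tuple2<String, Long>> counts = events
    .keyBy(0)                     // 按 Tuple 的第一个字段分组
    .timeWindow(Time.minutes(5))  // 内置窗口算子,内部保存窗口状态
    .sum(1)                       // 对第二个字段求和
    .uid("windowed-counts");      // 为该窗口算子分配稳定的 ID
{% endhighlight %}

这样即使后续调整作业拓扑,窗口算子的状态也能按照这个 ID 从 Savepoint 映射回来。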
 
-As a rule of thumb, yes. Strictly speaking, it is sufficient to only assign 
IDs via the `uid` method to the stateful operators in your job. The savepoint 
only contains state for these operators and stateless operator are not part of 
the savepoint.
 
-In practice, it is recommended to assign it to all operators, because some of 
Flink's built-in operators like the Window operator are also stateful and it is 
not obvious which built-in operators are actually stateful and which are not. 
If you are absolutely certain that an operator is stateless, you can skip the 
`uid` method.
+### 如果我在作业中添加一个需要状态的新算子,会发生什么?
 
-### What happens if I add a new operator that requires state to my job?
+当你向作业添加新算子时,它将在没有任何状态的情况下进行初始化。Savepoint 包含每个有状态算子的状态。无状态算子根本不是 Savepoint 的一部分。新算子的行为类似于无状态算子。
 
-When you add a new operator to your job it will be initialized without any 
state. Savepoints contain the state of each stateful operator. Stateless 
operators are simply not part of the savepoint. The new operator behaves 
similar to a stateless operator.
+### 如果从作业中删除有状态的算子会发生什么?
 
-### What happens if I delete an operator that has state from my job?
+默认情况下,从 Savepoint 恢复时会尝试将所有状态映射回恢复后的作业。如果你恢复时使用的 Savepoint 包含某个已被删除算子的状态,恢复就会因此失败。
 
-By default, a savepoint restore will try to match all state back to the 
restored job. If you restore from a savepoint that contains state for an 
operator that has been deleted, this will therefore fail. 
 
-You can allow non restored state by setting the `--allowNonRestoredState` 
(short: `-n`) with the run command:
+你可以在 run 命令中设置 `--allowNonRestoredState`(简写:`-n`)来允许跳过无法恢复的状态:
 
 {% highlight shell %}
 $ bin/flink run -s :savepointPath -n [:runArgs]
 {% endhighlight %}
 
-### What happens if I reorder stateful operators in my job?
+### 如果我在作业中重新排序有状态算子,会发生什么?
 
-If you assigned IDs to these operators, they will be restored as usual.
+如果给这些算子分配了 ID,它们将像往常一样恢复。
 
-If you did not assign IDs, the auto generated IDs of the stateful operators 
will most likely change after the reordering. This would result in you not 
being able to restore from a previous savepoint.
+如果没有分配 ID,则有状态算子自动生成的 ID 很可能在重新排序后发生更改。这将导致你无法从以前的 Savepoint 恢复。
 
-### What happens if I add or delete or reorder operators that have no state in 
my job?
+### 如果我添加、删除或重新排序作业中没有状态的算子,会发生什么?
 
-If you assigned IDs to your stateful operators, the stateless operators will 
not influence the savepoint restore.
+如果给有状态算子分配了 ID,则无状态算子不会影响 Savepoint 的恢复。
 
-If you did not assign IDs, the auto generated IDs of the stateful operators 
will most likely change after the reordering. This would result in you not 
being able to restore from a previous savepoint.
+如果没有分配 ID,则有状态算子自动生成的 ID 很可能在重新排序后发生更改。这将导致你无法从以前的 Savepoint 恢复。
 
-### What happens when I change the parallelism of my program when restoring?
+### 当我在恢复时改变程序的并行度时会发生什么?
 
-If the savepoint was triggered with Flink >= 1.2.0 and using no deprecated 
state API like `Checkpointed`, you can simply restore the program from a 
savepoint and specify a new parallelism.
+如果 Savepoint 是用 Flink >= 1.2.0触发的,并且没有使用像 `Checkpointed` 
这样的不推荐的状态API,那么你可以简单地从 Savepoint 恢复程序并指定新的并行性。
 
 Review comment:
   ```suggestion
   如果 Savepoint 是用 Flink >= 1.2.0 触发的,并且没有使用像 `Checkpointed` 
这样的不推荐的状态API,那么你可以简单地从 Savepoint 恢复程序并指定新的并行度。
   ```
