This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 69456c3e05d665dd2c2e1e1949c17908b3c5e74e
Author: Seth Wiesman <sjwies...@gmail.com>
AuthorDate: Mon Jun 8 14:03:11 2020 -0500

    [FLINK-17980][docs] Update broken links
---
 docs/concepts/index.md                           | 13 ++++++-------
 docs/concepts/index.zh.md                        |  9 ++++-----
 docs/dev/connectors/elasticsearch.md             |  2 +-
 docs/dev/connectors/elasticsearch.zh.md          |  2 +-
 docs/dev/stream/state/checkpointing.md           |  4 ++--
 docs/dev/stream/state/checkpointing.zh.md        |  4 ++--
 docs/index.md                                    |  6 +++---
 docs/index.zh.md                                 |  6 +++---
 docs/internals/task_lifecycle.md                 |  2 +-
 docs/internals/task_lifecycle.zh.md              |  2 +-
 docs/learn-flink/index.md                        |  2 +-
 docs/learn-flink/index.zh.md                     |  2 +-
 docs/ops/state/savepoints.md                     |  2 +-
 docs/ops/state/savepoints.zh.md                  |  2 +-
 docs/try-flink/flink-operations-playground.md    |  4 ++--
 docs/try-flink/flink-operations-playground.zh.md |  4 ++--
 16 files changed, 32 insertions(+), 34 deletions(-)

diff --git a/docs/concepts/index.md b/docs/concepts/index.md
index 2c44e1f..11beb8e 100644
--- a/docs/concepts/index.md
+++ b/docs/concepts/index.md
@@ -1,8 +1,8 @@
 ---
-title: Concepts in Depth
+title: Concepts
 nav-id: concepts
 nav-pos: 3
-nav-title: '<i class="fa fa-map-o title appetizer" aria-hidden="true"></i> Concepts in Depth'
+nav-title: '<i class="fa fa-map-o title appetizer" aria-hidden="true"></i> Concepts'
 nav-parent_id: root
 nav-show_overview: true
 permalink: /concepts/index.html
@@ -26,13 +26,12 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The [Hands-on Training]({% link training/index.md %}) explains the basic concepts
+The [Hands-on Training]({% link learn-flink/index.md %}) explains the basic concepts
 of stateful and timely stream processing that underlie Flink's APIs, and provides examples of how
 these mechanisms are used in applications. Stateful stream processing is introduced in the context
-of [Data Pipelines & ETL]({% link training/etl.md %}#stateful-transformations)
-and is further developed in the section on [Fault Tolerance]({% link
-training/fault_tolerance.md %}). Timely stream processing is introduced in the section on
-[Streaming Analytics]({% link training/streaming_analytics.md %}).
+of [Data Pipelines & ETL]({% link learn-flink/etl.md %}#stateful-transformations)
+and is further developed in the section on [Fault Tolerance]({% link learn-flink/fault_tolerance.md %}). Timely stream processing is introduced in the section on
+[Streaming Analytics]({% link learn-flink/streaming_analytics.md %}).
 
 This _Concepts in Depth_ section provides a deeper understanding of how Flink's architecture and runtime 
 implement these concepts.
diff --git a/docs/concepts/index.zh.md b/docs/concepts/index.zh.md
index fab83f7..54f7dfb 100644
--- a/docs/concepts/index.zh.md
+++ b/docs/concepts/index.zh.md
@@ -26,13 +26,12 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The [Hands-on Training]({% link training/index.zh.md %}) explains the basic 
concepts
+The [Hands-on Training]({% link learn-flink/index.zh.md %}) explains the basic 
concepts
 of stateful and timely stream processing that underlie Flink's APIs, and 
provides examples of how
 these mechanisms are used in applications. Stateful stream processing is 
introduced in the context
-of [Data Pipelines & ETL]({% link training/etl.zh.md 
%}#stateful-transformations)
-and is further developed in the section on [Fault Tolerance]({% link
-training/fault_tolerance.zh.md %}). Timely stream processing is introduced in 
the section on
-[Streaming Analytics]({% link training/streaming_analytics.zh.md %}).
+of [Data Pipelines & ETL]({% link learn-flink/etl.zh.md 
%}#stateful-transformations)
+and is further developed in the section on [Fault Tolerance]({% link 
learn-flink/fault_tolerance.zh.md %}). Timely stream processing is introduced 
in the section on
+[Streaming Analytics]({% link learn-flink/streaming_analytics.zh.md %}).
 
 This _Concepts in Depth_ section provides a deeper understanding of how 
Flink's architecture and runtime 
 implement these concepts.
diff --git a/docs/dev/connectors/elasticsearch.md b/docs/dev/connectors/elasticsearch.md
index 4b8b2da..9fd808e 100644
--- a/docs/dev/connectors/elasticsearch.md
+++ b/docs/dev/connectors/elasticsearch.md
@@ -317,7 +317,7 @@ time of checkpoints. This effectively assures that all requests before the
 checkpoint was triggered have been successfully acknowledged by Elasticsearch, before
 proceeding to process more records sent to the sink.
 
-More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{site.baseurl}}/training/fault_tolerance.html).
+More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{site.baseurl}}/learn-flink/fault_tolerance.html).
 
 To use fault tolerant Elasticsearch Sinks, checkpointing of the topology needs to be enabled at the execution environment:
 
diff --git a/docs/dev/connectors/elasticsearch.zh.md b/docs/dev/connectors/elasticsearch.zh.md
index 640f4d6..e39603b 100644
--- a/docs/dev/connectors/elasticsearch.zh.md
+++ b/docs/dev/connectors/elasticsearch.zh.md
@@ -317,7 +317,7 @@ time of checkpoints. This effectively assures that all requests before the
 checkpoint was triggered have been successfully acknowledged by Elasticsearch, before
 proceeding to process more records sent to the sink.
 
-More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{site.baseurl}}/zh/training/fault_tolerance.html).
+More details on checkpoints and fault tolerance are in the [fault tolerance docs]({{site.baseurl}}/zh/learn-flink/fault_tolerance.html).
 
 To use fault tolerant Elasticsearch Sinks, checkpointing of the topology needs to be enabled at the execution environment:
 
diff --git a/docs/dev/stream/state/checkpointing.md b/docs/dev/stream/state/checkpointing.md
index f5fef89..496b540 100644
--- a/docs/dev/stream/state/checkpointing.md
+++ b/docs/dev/stream/state/checkpointing.md
@@ -32,7 +32,7 @@ any type of more elaborate operation.
 In order to make state fault tolerant, Flink needs to **checkpoint** the state. Checkpoints allow Flink to recover state and positions
 in the streams to give the application the same semantics as a failure-free execution.
 
-The [documentation on streaming fault tolerance]({{ site.baseurl }}/training/fault_tolerance.html) describes in detail the technique behind Flink's streaming fault tolerance mechanism.
+The [documentation on streaming fault tolerance]({{ site.baseurl }}/learn-flink/fault_tolerance.html) describes in detail the technique behind Flink's streaming fault tolerance mechanism.
 
 
 ## Prerequisites
@@ -173,7 +173,7 @@ Some more parameters and/or defaults may be set via `conf/flink-conf.yaml` (see
 
 ## Selecting a State Backend
 
-Flink's [checkpointing mechanism]({{ site.baseurl }}/training/fault_tolerance.html) stores consistent snapshots
+Flink's [checkpointing mechanism]({{ site.baseurl }}/learn-flink/fault_tolerance.html) stores consistent snapshots
 of all the state in timers and stateful operators, including connectors, windows, and any [user-defined state](state.html).
 Where the checkpoints are stored (e.g., JobManager memory, file system, database) depends on the configured
 **State Backend**. 
diff --git a/docs/dev/stream/state/checkpointing.zh.md b/docs/dev/stream/state/checkpointing.zh.md
index 6f22d62..f9f0592 100644
--- a/docs/dev/stream/state/checkpointing.zh.md
+++ b/docs/dev/stream/state/checkpointing.zh.md
@@ -29,7 +29,7 @@ Flink 中的每个方法或算子都能够是**有状态的**(阅读 [working
 状态化的方法在处理单个 元素/事件 的时候存储数据,让状态成为使各个类型的算子更加精细的重要部分。
 为了让状态容错,Flink 需要为状态添加 **checkpoint(检查点)**。Checkpoint 使得 Flink 能够恢复状态和在流中的位置,从而向应用提供和无故障执行时一样的语义。
 
-[容错文档]({{ site.baseurl }}/zh/training/fault_tolerance.html) 中介绍了 Flink 流计算容错机制内部的技术原理。
+[容错文档]({{ site.baseurl }}/zh/learn-flink/fault_tolerance.html) 中介绍了 Flink 流计算容错机制内部的技术原理。
 
 
 ## 前提条件
@@ -165,7 +165,7 @@ env.get_checkpoint_config().set_prefer_checkpoint_for_recovery(True)
 
 ## 选择一个 State Backend
 
-Flink 的 [checkpointing 机制]({{ site.baseurl }}/zh/training/fault_tolerance.html) 会将 timer 以及 stateful 的 operator 进行快照,然后存储下来,
+Flink 的 [checkpointing 机制]({{ site.baseurl }}/zh/learn-flink/fault_tolerance.html) 会将 timer 以及 stateful 的 operator 进行快照,然后存储下来,
 包括连接器(connectors),窗口(windows)以及任何用户[自定义的状态](state.html)。
 Checkpoint 存储在哪里取决于所配置的 **State Backend**(比如 JobManager memory、 file system、 database)。
 
diff --git a/docs/index.md b/docs/index.md
index 09e7f7e..69eb302 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -30,11 +30,11 @@ Apache Flink is an open source platform for distributed stream and batch data pr
 ## First Steps
 
 * **Code Walkthroughs**: Follow step-by-step guides and implement a simple application or query in one of Flink's APIs. 
-  * [Implement a DataStream application](./getting-started/walkthroughs/datastream_api.html)
-  * [Write a Table API query](./getting-started/walkthroughs/table_api.html)
+  * [Implement a DataStream application]({% link try-flink/datastream_api.md %})
+  * [Write a Table API query]({% link try-flink/table_api.md %})
 
 * **Docker Playgrounds**: Set up a sandboxed Flink environment in just a few minutes to explore and play with Flink.
-  * [Run and manage Flink streaming applications](./getting-started/docker-playgrounds/flink-operations-playground.html)
+  * [Run and manage Flink streaming applications]({% link try-flink/flink-operations-playground.md %})
 
 * **Concepts**: Learn about Flink's concepts to better understand the documentation.
   * [Stateful Stream Processing](concepts/stateful-stream-processing.html)
diff --git a/docs/index.zh.md b/docs/index.zh.md
index f2b1bc8..d21ca73 100644
--- a/docs/index.zh.md
+++ b/docs/index.zh.md
@@ -31,11 +31,11 @@ Apache Flink 是一个分布式流批一体化的开源平台。Flink 的核心
 ## 初步印象
 
 * **代码练习**: 跟随分步指南通过 Flink API 实现简单应用或查询。
-  * [实现 DataStream 应用](./getting-started/walkthroughs/datastream_api.html)
-  * [书写 Table API 查询](./getting-started/walkthroughs/table_api.html)
+  * [实现 DataStream 应用]({% link try-flink/datastream_api.zh.md %})
+  * [书写 Table API 查询]({% link try-flink/table_api.zh.md %})
 
 * **Docker 游乐场**: 你只需花几分钟搭建 Flink 沙盒环境,就可以探索和使用 Flink 了。
-  * [运行与管理 Flink 流处理应用](./getting-started/docker-playgrounds/flink-operations-playground.html)
+  * [运行与管理 Flink 流处理应用]({% link try-flink/flink-operations-playground.zh.md %})
 
 * **概念**: 学习 Flink 的基本概念能更好地理解文档。
   * [有状态流处理](concepts/stateful-stream-processing.html)
diff --git a/docs/internals/task_lifecycle.md b/docs/internals/task_lifecycle.md
index 4d5c485..ee81780 100644
--- a/docs/internals/task_lifecycle.md
+++ b/docs/internals/task_lifecycle.md
@@ -92,7 +92,7 @@ operator is opened and before it is closed. The responsibility of this method is
 to the specified [state backend]({{ site.baseurl }}/ops/state/state_backends.html) from where it will be retrieved when 
 the job resumes execution after a failure. Below we include a brief description of Flink's checkpointing mechanism, 
 and for a more detailed discussion on the principles around checkpointing in Flink please read the corresponding documentation: 
-[Data Streaming Fault Tolerance]({{ site.baseurl }}/training/fault_tolerance.html).
+[Data Streaming Fault Tolerance]({{ site.baseurl }}/learn-flink/fault_tolerance.html).
 
 ## Task Lifecycle
 
diff --git a/docs/internals/task_lifecycle.zh.md b/docs/internals/task_lifecycle.zh.md
index bc5cccb..a06ec4e 100644
--- a/docs/internals/task_lifecycle.zh.md
+++ b/docs/internals/task_lifecycle.zh.md
@@ -92,7 +92,7 @@ operator is opened and before it is closed. The responsibility of this method is
 to the specified [state backend]({{ site.baseurl }}/ops/state/state_backends.html) from where it will be retrieved when 
 the job resumes execution after a failure. Below we include a brief description of Flink's checkpointing mechanism, 
 and for a more detailed discussion on the principles around checkpointing in Flink please read the corresponding documentation: 
-[Data Streaming Fault Tolerance]({{ site.baseurl }}/training/fault_tolerance.html).
+[Data Streaming Fault Tolerance]({{ site.baseurl }}/learn-flink/fault_tolerance.html).
 
 ## Task Lifecycle
 
diff --git a/docs/learn-flink/index.md b/docs/learn-flink/index.md
index f17bdfb..873c020 100644
--- a/docs/learn-flink/index.md
+++ b/docs/learn-flink/index.md
@@ -1,5 +1,5 @@
 ---
-title: Learn Flink
+title: "Learn Flink: Hands-on Training"
 nav-id: learn-flink
 nav-pos: 2
 nav-title: '<i class="fa fa-hand-paper-o title appetizer" aria-hidden="true"></i> Learn Flink'
diff --git a/docs/learn-flink/index.zh.md b/docs/learn-flink/index.zh.md
index 3707487..012ea0a 100644
--- a/docs/learn-flink/index.zh.md
+++ b/docs/learn-flink/index.zh.md
@@ -1,5 +1,5 @@
 ---
-title: Hands-on Training
+title: "Learn Flink: Hands-on Training"
 nav-id: training
 nav-pos: 2
 nav-title: '<i class="fa fa-hand-paper-o title appetizer" aria-hidden="true"></i> Hands-on Training'
diff --git a/docs/ops/state/savepoints.md b/docs/ops/state/savepoints.md
index d1e07f2..bc23450 100644
--- a/docs/ops/state/savepoints.md
+++ b/docs/ops/state/savepoints.md
@@ -27,7 +27,7 @@ under the License.
 
 ## What is a Savepoint? How is a Savepoint different from a Checkpoint?
 
-A Savepoint is a consistent image of the execution state of a streaming job, created via Flink's [checkpointing mechanism]({{ site.baseurl }}/training/fault_tolerance.html). You can use Savepoints to stop-and-resume, fork,
+A Savepoint is a consistent image of the execution state of a streaming job, created via Flink's [checkpointing mechanism]({{ site.baseurl }}/learn-flink/fault_tolerance.html). You can use Savepoints to stop-and-resume, fork,
 or update your Flink jobs. Savepoints consist of two parts: a directory with (typically large) binary files on stable storage (e.g. HDFS, S3, ...) and a (relatively small) meta data file. The files on stable storage represent the net data of the job's execution state
 image. The meta data file of a Savepoint contains (primarily) pointers to all files on stable storage that are part of the Savepoint, in form of absolute paths.
 
diff --git a/docs/ops/state/savepoints.zh.md b/docs/ops/state/savepoints.zh.md
index b8c52f7..2981fb7 100644
--- a/docs/ops/state/savepoints.zh.md
+++ b/docs/ops/state/savepoints.zh.md
@@ -27,7 +27,7 @@ under the License.
 
 ## 什么是 Savepoint ? Savepoint 与 Checkpoint 有什么不同?
 
-Savepoint 是依据 Flink [checkpointing 机制]({{ site.baseurl }}/zh/training/fault_tolerance.html)所创建的流作业执行状态的一致镜像。 你可以使用 Savepoint 进行 Flink 作业的停止与重启、fork 或者更新。 Savepoint 由两部分组成:稳定存储(列入 HDFS,S3,...) 上包含二进制文件的目录(通常很大),和元数据文件(相对较小)。 稳定存储上的文件表示作业执行状态的数据镜像。 Savepoint 的元数据文件以(绝对路径)的形式包含(主要)指向作为 Savepoint 一部分的稳定存储上的所有文件的指针。
+Savepoint 是依据 Flink [checkpointing 机制]({{ site.baseurl }}/zh/learn-flink/fault_tolerance.html)所创建的流作业执行状态的一致镜像。 你可以使用 Savepoint 进行 Flink 作业的停止与重启、fork 或者更新。 Savepoint 由两部分组成:稳定存储(列入 HDFS,S3,...) 上包含二进制文件的目录(通常很大),和元数据文件(相对较小)。 稳定存储上的文件表示作业执行状态的数据镜像。 Savepoint 的元数据文件以(绝对路径)的形式包含(主要)指向作为 Savepoint 一部分的稳定存储上的所有文件的指针。
 
 <div class="alert alert-warning">
 <strong>注意:</strong> 为了允许程序和 Flink 版本之间的升级,请务必查看以下有关<a href="#分配算子-id">分配算子 ID </a>的部分 。
diff --git a/docs/try-flink/flink-operations-playground.md b/docs/try-flink/flink-operations-playground.md
index 956f118..ebcbe02 100644
--- a/docs/try-flink/flink-operations-playground.md
+++ b/docs/try-flink/flink-operations-playground.md
@@ -316,7 +316,7 @@ docker-compose up -d taskmanager
 
 When the Master is notified about the new TaskManager, it schedules the tasks of the 
 recovering Job to the newly available TaskSlots. Upon restart, the tasks recover their state from
-the last successful [checkpoint]({{ site.baseurl }}/training/fault_tolerance.html) that was taken
+the last successful [checkpoint]({{ site.baseurl }}/learn-flink/fault_tolerance.html) that was taken
 before the failure and switch to the `RUNNING` state.
 
 The Job will quickly process the full backlog of input events (accumulated during the outage) 
@@ -806,7 +806,7 @@ You might have noticed that the *Click Event Count* application was always start
 and `--event-time` program arguments. By omitting these in the command of the *client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
-* `--checkpointing` enables [checkpoint]({{ site.baseurl }}/training/fault_tolerance.html), 
+* `--checkpointing` enables [checkpoint]({{ site.baseurl }}/learn-flink/fault_tolerance.html), 
 which is Flink's fault-tolerance mechanism. If you run without it and go through 
 [failure and recovery](#observing-failure--recovery), you will see that data is actually 
 lost.
diff --git a/docs/try-flink/flink-operations-playground.zh.md b/docs/try-flink/flink-operations-playground.zh.md
index 956f118..ebcbe02 100644
--- a/docs/try-flink/flink-operations-playground.zh.md
+++ b/docs/try-flink/flink-operations-playground.zh.md
@@ -316,7 +316,7 @@ docker-compose up -d taskmanager
 
 When the Master is notified about the new TaskManager, it schedules the tasks of the 
 recovering Job to the newly available TaskSlots. Upon restart, the tasks recover their state from
-the last successful [checkpoint]({{ site.baseurl }}/training/fault_tolerance.html) that was taken
+the last successful [checkpoint]({{ site.baseurl }}/learn-flink/fault_tolerance.html) that was taken
 before the failure and switch to the `RUNNING` state.
 
 The Job will quickly process the full backlog of input events (accumulated during the outage) 
@@ -806,7 +806,7 @@ You might have noticed that the *Click Event Count* application was always start
 and `--event-time` program arguments. By omitting these in the command of the *client* container in the 
 `docker-compose.yaml`, you can change the behavior of the Job.
 
-* `--checkpointing` enables [checkpoint]({{ site.baseurl }}/training/fault_tolerance.html), 
+* `--checkpointing` enables [checkpoint]({{ site.baseurl }}/learn-flink/fault_tolerance.html), 
 which is Flink's fault-tolerance mechanism. If you run without it and go through 
 [failure and recovery](#observing-failure--recovery), you will see that data is actually 
 lost.
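The rewrite this patch applies is mechanical: `training/…` paths become `learn-flink/…`, both inside Jekyll `{% link %}` tags and in `{{ site.baseurl }}`-prefixed `.html` URLs. A minimal sketch of that transformation (a hypothetical helper for illustration, not the script actually used for this commit):

```python
import re

def migrate_link(line: str) -> str:
    """Rewrite old training/ doc paths to their learn-flink/ equivalents."""
    # {% link training/foo.md %}  ->  {% link learn-flink/foo.md %}
    line = re.sub(r'(\{%\s*link\s+)training/', r'\1learn-flink/', line)
    # .../training/foo.html  ->  .../learn-flink/foo.html
    # (also covers the /zh/training/ variants in the .zh.md files)
    line = line.replace('/training/', '/learn-flink/')
    return line

print(migrate_link('{% link training/fault_tolerance.md %}'))
# -> {% link learn-flink/fault_tolerance.md %}
```

Relative `./getting-started/...` links were additionally converted by hand to `{% link %}` tags, which Jekyll resolves at build time and fails on when the target file is missing, so future renames break the build instead of the rendered site.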
