This is an automated email from the ASF dual-hosted git repository.

nkruber pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink-training.git


The following commit(s) were added to refs/heads/master by this push:
     new aca6c47  [FLINK-26382] Add Chinese documents for flink-training 
exercises (#46)
aca6c47 is described below

commit aca6c47b79d486eb38969492c7e2dc8cb200d146
Author: T.C <tonny_1...@hotmail.com>
AuthorDate: Fri Apr 22 17:09:26 2022 +0800

    [FLINK-26382] Add Chinese documents for flink-training exercises (#46)
    
    Co-authored-by: Victor Xu <victor.uni...@gmail.com>
    Co-authored-by: Nico Kruber <n...@ververica.com>
---
 README.md                                       |   2 +
 README_zh.md                                    | 274 ++++++++++++++++++++++++
 build.gradle                                    |   2 +-
 hourly-tips/DISCUSSION.md                       |   2 +
 hourly-tips/{DISCUSSION.md => DISCUSSION_zh.md} |  46 ++--
 hourly-tips/README.md                           |   2 +
 hourly-tips/README_zh.md                        |  82 +++++++
 long-ride-alerts/DISCUSSION.md                  |   2 +
 long-ride-alerts/DISCUSSION_zh.md               |  50 +++++
 long-ride-alerts/README.md                      |   2 +
 long-ride-alerts/{README.md => README_zh.md}    |  68 +++---
 ride-cleansing/README.md                        |   2 +
 ride-cleansing/{README.md => README_zh.md}      |  55 ++---
 rides-and-fares/README.md                       |   2 +
 rides-and-fares/README_zh.md                    |  95 ++++++++
 15 files changed, 601 insertions(+), 85 deletions(-)

diff --git a/README.md b/README.md
index 0fc84e1..b1a65cb 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+[中文版](./README_zh.md)
+
 # Apache Flink Training Exercises
 
 Exercises that accompany the training content in the documentation.
diff --git a/README_zh.md b/README_zh.md
new file mode 100644
index 0000000..2af9bba
--- /dev/null
+++ b/README_zh.md
@@ -0,0 +1,274 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Apache Flink 实践练习
+
+与文档中实践练习内容相关的练习。
+
+## 目录
+
+[**设置开发环境**](#set-up-your-development-environment)
+
+1. [软件要求](#software-requirements)
+1. [克隆并构建 flink-training 项目](#clone-and-build-the-flink-training-project)
+1. [将 flink-training 项目导入 
IDE](#import-the-flink-training-project-into-your-ide)
+
+[**使用出租车数据流(taxi data stream)**](#using-the-taxi-data-streams)
+
+1. [出租车车程(taxi ride)事件结构](#schema-of-taxi-ride-events)
+1. [出租车费用(taxi fare)事件结构](#schema-of-taxi-fare-events)
+
+[**如何做练习**](#how-to-do-the-lab-exercises)
+
+1. [了解数据](#learn-about-the-data)
+2. [在 IDE 中运行和调试 Flink 程序](#run-and-debug-flink-programs-in-your-ide)
+3. [练习、测试及解决方案](#exercises-tests-and-solutions)
+
+[**练习**](#lab-exercises)
+
+[**提交贡献**](#contributing)
+
+[**许可证**](#license)
+
+<a name="set-up-your-development-environment"></a>
+
+## 设置开发环境
+
+你需要设置好开发环境,以便开发、调试并运行实践练习中的示例和解决方案。
+
+<a name="software-requirements"></a>
+
+### 软件要求
+
+Linux、OS X 和 Windows 均可作为 Flink 程序和本地执行的开发环境。 Flink 开发设置需要以下软件,它们应该安装在系统上:
+
+- Git
+- Java 8 或者 Java 11 版本的 JDK(JRE 不满足要求;目前不支持其他版本的 Java)
+- 支持 Gradle 的 Java(及/或 Scala)开发 IDE
+    - 推荐使用 [IntelliJ](https://www.jetbrains.com/idea/), 但 
[Eclipse](https://www.eclipse.org/downloads/) 或 [Visual Studio 
Code](https://code.visualstudio.com/) (安装 [Java extension 
pack](https://code.visualstudio.com/docs/java/java-tutorial) 插件) 也可以用于Java环境
+    - 为了使用 Scala, 需要使用 IntelliJ (及其 [Scala 
plugin](https://plugins.jetbrains.com/plugin/1347-scala/) 插件)
+
+> **:information_source: Windows 用户须知:** 实践说明中提供的 shell 命令示例适用于 UNIX 环境。
+> 你可能会发现在 Windows 环境中安装 cygwin 或 WSL 是值得的。对于开发 Flink 作业(jobs),Windows 工作得相当好:可以在单机上运行 Flink 集群、提交作业、运行 WebUI 并在 IDE 中执行作业。
+
+<a name="clone-and-build-the-flink-training-project"></a>
+
+### 克隆并构建 flink-training 项目
+
+`flink-training` 仓库包含编程练习的习题、测试和参考解决方案。
+
+> **:information_source: 仓库结构:** 本仓库有几个分支,分别指向不同的 Apache Flink 版本,类似于 [apache/flink](https://github.com/apache/flink) 仓库:
+> - 每个 Apache Flink 次要版本的发布分支,例如 `release-1.10`,和
+> - 一个指向当前 Flink 版本的 `master` 分支(不是 `flink:master`!)
+>
+> 如果想在当前 Flink 版本以外的版本上工作,请务必签出相应的分支。
+
+从 GitHub 克隆出 `flink-training` 仓库,导航到本地项目仓库并构建它:
+
+```bash
+git clone https://github.com/apache/flink-training.git
+cd flink-training
+./gradlew test shadowJar
+```
+
+如果是第一次构建,将会下载此 Flink 练习项目的所有依赖项。这通常需要几分钟时间,但具体取决于互联网连接速度。
+
+如果所有测试都通过并且构建成功,这说明你的实践练习已经开了一个好头。
+
+<details>
+<summary><strong>:cn: 中国用户: 点击这里了解如何使用本地 Maven 镜像。</strong></summary>
+
+如果你在中国,我们建议将 Maven 存储库配置为使用镜像。 可以通过在 [`build.gradle`](build.gradle) 
文件中取消注释此部分来做到这一点:
+
+```groovy
+    repositories {
+        // for access from China, you may need to uncomment this line
+        maven { url 'https://maven.aliyun.com/repository/public/' }
+        mavenCentral()
+        maven {
+            url "https://repository.apache.org/content/repositories/snapshots/";
+            mavenContent {
+                snapshotsOnly()
+            }
+        }
+    }
+```
+</details>
+
+<details>
+<summary><strong>启用 Scala (可选)</strong></summary>
+这个项目中的练习也可以使用 Scala ,但由于非 Scala 用户报告的一些问题,我们决定默认禁用 Scala。
+可以通过以下的方法修改 `gradle.properties` 文件以重新启用所有 Scala 练习和解决方案:
+
+[`gradle.properties`](gradle.properties) 文件如下:
+
+```properties
+#...
+
+# Scala exercises can be enabled by setting this to true
+org.gradle.project.enable_scala = true
+```
+
+如果需要,还可以选择性地在单个子项目中应用该插件。
+</details>
+
+<a name="import-the-flink-training-project-into-your-ide"></a>
+
+### 将 flink-training 项目导入 IDE
+
+本项目应作为 Gradle 项目导入到 IDE 中。
+
+然后应该可以打开 
[`RideCleansingTest`](ride-cleansing/src/test/java/org/apache/flink/training/exercises/ridecleansing/RideCleansingTest.java)
 并运行此测试。
+
+> **:information_source: Scala 用户须知:** 需要将 IntelliJ 与 JetBrains Scala 
插件一起使用,并且需要将 Scala 2.12 SDK 添加到项目结构的全局库部分以及工作模块中。
+> 当打开 Scala 文件时,IntelliJ 会提示你完成后者(即将 Scala SDK 添加到工作模块)。
+> 请注意 Scala 2.12.8 及以上版本不受支持 (详细信息参见 [Flink Scala 
Versions](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/project-configuration/#scala-versions))!
+
+<a name="using-the-taxi-data-streams"></a>
+
+## 使用出租车数据流(taxi data stream)
+
+练习中使用数据[生成器(generators)](common/src/main/java/org/apache/flink/training/exercises/common/sources)产生模拟的事件流。
+该数据的灵感来自[纽约市出租车与豪华礼车管理局(New York City Taxi & Limousine Commission)](http://www.nyc.gov/html/tlc/html/home/home.shtml)
+关于纽约市出租车车程的公开[数据集](https://uofi.app.box.com/NYCtaxidata)。
+
+<a name="schema-of-taxi-ride-events"></a>
+
+### 出租车车程(taxi ride)事件结构
+
+出租车数据集包含纽约市每次出租车车程的信息。
+
+每次车程都由两个事件表示:行程开始(trip start)和行程结束(trip end)。
+
+每个事件都由十个字段组成:
+
+```
+rideId         : Long      // 每次车程的唯一id
+taxiId         : Long      // 每一辆出租车的唯一id
+driverId       : Long      // 每一位司机的唯一id
+isStart        : Boolean   // 行程开始事件为 TRUE, 行程结束事件为 FALSE
+eventTime      : Instant   // 事件的时间戳
+startLon       : Float     // 车程开始位置的经度
+startLat       : Float     // 车程开始位置的纬度
+endLon         : Float     // 车程结束位置的经度
+endLat         : Float     // 车程结束位置的纬度
+passengerCnt   : Short     // 乘车人数
+```
+
+<a name="schema-of-taxi-fare-events"></a>
+
+### 出租车车费(taxi fare)事件结构
+
+还有一个包含与车程相关费用的数据集,它具有以下字段:
+
+```
+rideId         : Long      // 每次车程的唯一id
+taxiId         : Long      // 每一辆出租车的唯一id
+driverId       : Long      // 每一位司机的唯一id
+startTime      : Instant   // 车程开始时间
+paymentType    : String    // 现金(CASH)或刷卡(CARD)
+tip            : Float     // 小费
+tolls          : Float     // 过路费
+totalFare      : Float     // 总计车费
+```
+
+<a name="how-to-do-the-lab-exercises"></a>
+
+## 如何做练习
+
+在实践课程中,你将使用各种 Flink API 实现 Flink 程序。
+
+以下步骤将指导你完成使用提供的数据流、实现第一个 Flink 流程序以及在 IDE 中执行程序的过程。
+
+我们假设你已根据 [设置指南](#set-up-your-development-environment) 准备好了开发环境。
+
+<a name="learn-about-the-data"></a>
+
+### 了解数据
+
+最初的一组练习都是基于有关出租车车程和出租车车费的事件数据流。这些流由从输入文件读取数据的源函数产生。
+参见 [说明](#using-the-taxi-data-streams) 以了解如何使用它们。
+
+<a name="run-and-debug-flink-programs-in-your-ide"></a>
+
+### 在 IDE 中运行和调试 Flink 程序
+
+Flink 程序可以在 IDE 中执行和调试。这显著地简化了开发过程,并可提供类似于使用任何其他 Java(或 Scala)应用程序的体验。
+
+要在 IDE 中启动 Flink 程序,请运行它的 `main()` 方法。在后台,执行环境将在同一进程中启动本地 Flink 
实例。因此,可以在代码中放置断点并对其进行调试。
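+
+下面是一个最小的示意程序(类名与流水线内容均为假设,仅用于说明机制):在 IDE 中直接运行 `main()` 时,`getExecutionEnvironment()` 会在当前进程内创建本地执行环境,因此断点调试可以正常工作。
+
+```java
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+public class LocalJobSketch {
+
+    public static void main(String[] args) throws Exception {
+        // 在 IDE 中运行时,会在同一进程中启动一个本地 Flink 执行环境
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+        // 一个极简的流水线:生成几个元素、做简单转换并打印,便于设置断点
+        env.fromElements(1, 2, 3)
+                .map(n -> n * 2)
+                .print();
+
+        env.execute("local debugging sketch");
+    }
+}
+```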
+
+如果 IDE 已导入 `flink-training` 项目,则可以通过以下方式运行(或调试)流式作业:
+
+- 在 IDE 中打开 `org.apache.flink.training.examples.ridecount.RideCountExample` 类
+- 使用 IDE 运行(或调试)`RideCountExample` 类的`main()` 方法
+
+<a name="exercises-tests-and-solutions"></a>
+
+### 练习、测试及解决方案
+
+每一项练习都包括:
+- 一个 `...Exercise` 类,其中包含运行所需的大部分样板代码
+- 一个 JUnit 测试类(`...Test`),其中包含一些针对实现的测试
+- 具有完整解决方案的 `...Solution` 类
+
+所有练习、测试和解决方案类都有 Java 和 Scala 版本。 它们都可以在 IntelliJ 中运行。
+
+> **:information_source: 注意:** 如果 `...Exercise` 类抛出 `MissingSolutionException` 异常,所提供的 JUnit 测试类将忽略该失败,并转而验证已提供的参考解决方案实现的正确性。
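+
+下面用一个简化的假想示例示意这一机制(`Pipeline` 接口与 `runExerciseOrSolution` 方法均为本说明虚构,`MissingSolutionException` 假设来自本仓库的 common 模块;真实测试基类的实现方式可能不同,例如通过检查异常链来判断):
+
+```java
+import java.util.List;
+
+import org.apache.flink.training.exercises.common.utils.MissingSolutionException;
+
+public class ExerciseOrSolutionSketch {
+
+    /** 可以抛出异常的一段流水线(虚构的接口,仅为示意)。 */
+    @FunctionalInterface
+    public interface Pipeline {
+        List<Long> run() throws Exception;
+    }
+
+    /** 先尝试练习实现;若其抛出 MissingSolutionException,则回退去运行参考解决方案。 */
+    public static List<Long> runExerciseOrSolution(Pipeline exercise, Pipeline solution) throws Exception {
+        try {
+            return exercise.run();
+        } catch (MissingSolutionException e) {
+            // 练习尚未实现:忽略该失败,转而验证参考解决方案
+            return solution.run();
+        }
+    }
+}
+```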
+
+你可以使用 `gradlew` 命令运行练习、解决方案和测试。
+
+运行测试:
+
+```bash
+./gradlew test
+./gradlew :<subproject>:test
+```
+
+对于 Java/Scala 练习和解决方案,我们提供了专门的运行任务,可以通过以下命令获取任务清单:
+
+```bash
+./gradlew printRunTasks
+```
+
+:point_down: 至此,你已准备好开始进行练习。 :point_down:
+
+<a name="lab-exercises"></a>
+
+## 练习
+
+1. [过滤流(车程清理)](ride-cleansing/README_zh.md)
+1. [有状态的增强(车程及车费)](rides-and-fares/README_zh.md)
+1. [窗口分析(每小时小费)](hourly-tips/README_zh.md)
+    - [练习](hourly-tips/README_zh.md)
+    - [讨论](hourly-tips/DISCUSSION_zh.md)
+1. [`ProcessFunction` 及定时器(长车程警报)](long-ride-alerts/README_zh.md)
+    - [练习](long-ride-alerts/README_zh.md)
+    - [讨论](long-ride-alerts/DISCUSSION_zh.md)
+
+<a name="contributing"></a>
+
+## 提交贡献
+
+如果你想为此仓库做出贡献或添加新练习,请阅读 [提交贡献](CONTRIBUTING.md) 指南。
+
+<a name="license"></a>
+
+## 许可证
+
+本仓库中的代码基于 [Apache Software License 2](LICENSE) 许可证。
diff --git a/build.gradle b/build.gradle
index aea9c9f..8053a98 100644
--- a/build.gradle
+++ b/build.gradle
@@ -57,7 +57,7 @@ allprojects {
                     'KIND, either express or implied.  See the License for 
the\n' +
                     'specific language governing permissions and 
limitations\n' +
                     'under the License.\n' +
-                    '-->\n\n', '# '
+                    '-->\n\n', '(# )|(\\[.*\\]\\(.*\\))'
 
         }
 
diff --git a/hourly-tips/DISCUSSION.md b/hourly-tips/DISCUSSION.md
index ef7c6c0..52d0432 100644
--- a/hourly-tips/DISCUSSION.md
+++ b/hourly-tips/DISCUSSION.md
@@ -17,6 +17,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+[中文版](./DISCUSSION_zh.md)
+
 # Lab Discussion: Windowed Analytics (Hourly Tips)
 
 (Discussion of [Lab: Windowed Analytics (Hourly Tips)](./))
diff --git a/hourly-tips/DISCUSSION.md b/hourly-tips/DISCUSSION_zh.md
similarity index 58%
copy from hourly-tips/DISCUSSION.md
copy to hourly-tips/DISCUSSION_zh.md
index ef7c6c0..c43b282 100644
--- a/hourly-tips/DISCUSSION.md
+++ b/hourly-tips/DISCUSSION_zh.md
@@ -17,11 +17,13 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Lab Discussion: Windowed Analytics (Hourly Tips)
+# 练习讨论: 窗口分析(每小时小费)
 
-(Discussion of [Lab: Windowed Analytics (Hourly Tips)](./))
+(关于[窗口分析(每小时小费)](./README_zh.md)的讨论)
 
-The Java and Scala reference solutions illustrate two different approaches, 
though they have a lot of similarities. Both first compute the sum of the tips 
for every hour for each driver. In 
[`HourlyTipsSolution.java`](src/solution/java/org/apache/flink/training/solutions/hourlytips/HourlyTipsSolution.java)
 that looks like this,
+尽管有很多相似之处,Java 和 Scala 参考解决方案展示了两种不同的方法。
+两者首先都计算每个司机每小时的小费总和。
+[`HourlyTipsSolution.java`](src/solution/java/org/apache/flink/training/solutions/hourlytips/HourlyTipsSolution.java)
 看起来像这样,
 
 ```java
 DataStream<Tuple3<Long, Long, Float>> hourlyTips = fares
@@ -30,7 +32,7 @@ DataStream<Tuple3<Long, Long, Float>> hourlyTips = fares
     .process(new AddTips());
 ```
 
-where a `ProcessWindowFunction` does all the heavy lifting:
+其中, `ProcessWindowFunction` 完成了所有繁重的工作:
 
 ```java
 public static class AddTips extends ProcessWindowFunction<
@@ -46,9 +48,10 @@ public static class AddTips extends ProcessWindowFunction<
 }
 ```
 
-This is straightforward, but has the drawback that it is buffering all of the 
`TaxiFare` objects in the windows until the windows are triggered, which is 
less efficient than computing the sum of the tips incrementally, using a 
`reduce` or `agggregate` function.
+这很简单,但缺点是它会缓冲窗口中所有的 `TaxiFare` 对象,直到窗口被触发。
+相比使用 `reduce` 或 `aggregate` 方法来增量计算小费总额,这种方法的效率较低。
 
-The [Scala 
solution](src/solution/scala/org/apache/flink/training/solutions/hourlytips/scala/HourlyTipsSolution.scala)
 uses a `reduce` function
+[Scala 
解决方案](src/solution/scala/org/apache/flink/training/solutions/hourlytips/scala/HourlyTipsSolution.scala)使用了
 `reduce` 函数:
 
 ```scala
 val hourlyTips = fares
@@ -60,7 +63,7 @@ val hourlyTips = fares
     new WrapWithWindowInfo())
 ```
 
-along with this `ProcessWindowFunction`
+连同这样的 `ProcessWindowFunction`:
 
 ```scala
 class WrapWithWindowInfo() extends ProcessWindowFunction[(Long, Float), (Long, 
Long, Float), Long, TimeWindow] {
@@ -71,9 +74,9 @@ class WrapWithWindowInfo() extends 
ProcessWindowFunction[(Long, Float), (Long, L
 }
 ```
 
-to compute `hourlyTips`.
+以计算 `hourlyTips`。
 
-Having computed `hourlyTips`, it is a good idea to take a look at what this 
stream looks like. `hourlyTips.print()` yields something like this,
+计算出 `hourlyTips` 之后,让我们来看看这个流是什么样的。`hourlyTips.print()` 产生了类似这样的结果:
 
 ```
 2> (1577883600000,2013000185,33.0)
@@ -89,9 +92,9 @@ Having computed `hourlyTips`, it is a good idea to take a 
look at what this stre
 ...
 ```
 
-or in other words, lots of tuples for each hour that show for each driver, the 
sum of their tips for that hour.
+可以看到,每个小时都有大量的三元组显示每个司机在这一个小时内的小费总额。
 
-Now, how to find the maximum within each hour? The reference solutions both do 
this, more or less:
+现在,如何找到每个小时内的最大值? 参考解决方案或多或少都这样做:
 
 ```java
 DataStream<Tuple3<Long, Long, Float>> hourlyMax = hourlyTips
@@ -99,7 +102,7 @@ DataStream<Tuple3<Long, Long, Float>> hourlyMax = hourlyTips
     .maxBy(2);
 ```
 
-which works just fine, producing this stream of results:
+这样做效果很好,会产生如下结果流:
 
 ```
 3> (1577883600000,2013000089,76.0)
@@ -110,7 +113,7 @@ which works just fine, producing this stream of results:
 4> (1577901600000,2013000072,123.0)
 ```
 
-But, what if we were to do this, instead?
+但是,如果换成这样呢?
 
 ```java
 DataStream<Tuple3<Long, Long, Float>> hourlyMax = hourlyTips
@@ -118,13 +121,11 @@ DataStream<Tuple3<Long, Long, Float>> hourlyMax = 
hourlyTips
     .maxBy(2);
 ```
 
-This says to group the stream of `hourlyTips` by timestamp, and within each 
timestamp, find the maximum of the sum of the tips.
-That sounds like it is exactly what we want. And while this alternative does 
find the same results,
-there are a couple of reasons why it is not a very good solution.
+这表示按时间戳对 `hourlyTips` 流进行分组,并在每个时间戳分组内找到小费总和的最大值。这听起来正是我们想要的。
+虽然这个替代方案确实能得到相同的结果,但有几个原因使它算不上一个好的解决方案。
 
-First, instead of producing a single result at the end of each window, with 
this approach we get a stream that is
-continuously reporting the maximum achieved so far, for each key (i.e., each 
hour), which is an awkward way to consume
-the result if all you wanted was a single value for each hour.
+首先,这种方法不是在每个窗口的结束时产生一个结果,而是创建了一个连续报告每个键值(即每小时)迄今为止达到的最大值的流。
+如果只想得到每小时的一个单一值,那么以这种方式消费结果会很笨拙。
 
 ```
 1> (1577883600000,2013000108,14.0)
@@ -143,10 +144,9 @@ the result if all you wanted was a single value for each 
hour.
 ...
 ```
 
-Second, Flink will be keeping in state the maximum seen so far for each key 
(each hour), forever.
-Flink has no idea that these keys are event-time timestamps, and that the 
watermarks could be used as
-an indicator of when this state can be cleared -- to get those semantics, we 
need to use windows.
+其次,Flink 将永远保持每个键值(每小时)迄今为止出现的最大值。
+Flink 不知道这些键值是事件的时间戳,也不知道水位线可以被用作何时清除此状态的指示器——为了获得这些语义,我们需要使用窗口。
 
 -----
 
-[**Back to Labs Overview**](../README.md#lab-exercises)
+[**返回练习概述**](../README_zh.md#lab-exercises)
diff --git a/hourly-tips/README.md b/hourly-tips/README.md
index a49a4c5..177dfe4 100644
--- a/hourly-tips/README.md
+++ b/hourly-tips/README.md
@@ -17,6 +17,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+[中文版](./README_zh.md)
+
 # Lab: Windowed Analytics (Hourly Tips)
 
 The task of the "Hourly Tips" exercise is to identify, for each hour, the 
driver earning the most tips. It's easiest to approach this in two steps: first 
use hour-long windows that compute the total tips for each driver during the 
hour, and then from that stream of window results, find the driver with the 
maximum tip total for each hour.
diff --git a/hourly-tips/README_zh.md b/hourly-tips/README_zh.md
new file mode 100644
index 0000000..67edce1
--- /dev/null
+++ b/hourly-tips/README_zh.md
@@ -0,0 +1,82 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 练习: 窗口分析 (每小时小费)
+
+“每小时小费”练习的任务是确定每小时赚取最多小费的司机。
+最简单的方法是通过两个步骤来解决这个问题:首先使用一个小时长的窗口来计算每个司机在一小时内的总小费,然后从该窗口结果流中找到每小时总小费最多的司机。
+
+请注意,该程序应使用事件时间(event time)。
+
+### 输入数据
+
+本练习的输入数据是由[出租车车费流生成器](../README_zh.md#using-the-taxi-data-streams)生成的 
`TaxiFare` 事件流。
+
+`TaxiFareGenerator` 会为其生成的 `DataStream<TaxiFare>` 标注时间戳和水位线(watermark)。
+因此,无需提供自定义的时间戳和水位线分配器即可正确使用事件时间。
+
+### 期望输出
+
+所希望的结果是每小时产生一个 `Tuple3<Long, Long, Float>` 记录的数据流。
+这个记录(`Tuple3<Long, Long, Float>`)应包含该小时结束时的时间戳(对应三元组的第一个元素)、
+该小时内获得小费最多的司机的 driverId(对应三元组的第二个元素)以及其实际小费总数(对应三元组的第三个元素)。
+
+结果流应打印到标准输出。
+
+## 入门指南
+
+> :information_source: 最好在 IDE 的 flink-training 项目中找到这些类,而不是使用本节中源文件的链接。
+
+### 练习相关类
+
+- Java:  
[`org.apache.flink.training.exercises.hourlytips.HourlyTipsExercise`](src/main/java/org/apache/flink/training/exercises/hourlytips/HourlyTipsExercise.java)
+- Scala: 
[`org.apache.flink.training.exercises.hourlytips.scala.HourlyTipsExercise`](src/main/scala/org/apache/flink/training/exercises/hourlytips/scala/HourlyTipsExercise.scala)
+
+### 测试
+
+- Java:  
[`org.apache.flink.training.exercises.hourlytips.HourlyTipsTest`](src/test/java/org/apache/flink/training/exercises/hourlytips/HourlyTipsTest.java)
+- Scala: 
[`org.apache.flink.training.exercises.hourlytips.scala.HourlyTipsTest`](src/test/scala/org/apache/flink/training/exercises/hourlytips/scala/HourlyTipsTest.scala)
+
+## 实现提示
+
+<details>
+<summary><strong>程序结构</strong></summary>
+
+请注意,只要时间范围兼容(第二组窗口的持续时间是第一组的整数倍),就可以将一组时间窗口级联在另一组之后。
+因此,可以先使用按 `driverId` 键控(keyBy)的一小时窗口,生成一个 `(endOfHourTimestamp, driverId, totalTips)` 流,
+然后再使用一个非键控的一小时窗口,从第一组窗口的结果中找出 `totalTips` 最大的记录。
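+
+下面的片段大致示意了这两步级联(思路与讨论页中的参考解决方案一致;`fares` 假定为已带水位线的 `TaxiFare` 流,`AddTips` 为讨论页中展示的 `ProcessWindowFunction`,也可以换成 `reduce`/`aggregate`):
+
+```java
+// 第一步:按 driverId 键控的一小时窗口,得到 (endOfHourTimestamp, driverId, totalTips)
+DataStream<Tuple3<Long, Long, Float>> hourlyTips = fares
+        .keyBy((TaxiFare fare) -> fare.driverId)
+        .window(TumblingEventTimeWindows.of(Time.hours(1)))
+        .process(new AddTips());
+
+// 第二步:非键控的一小时窗口,在窗口内取小费总额最大的记录
+DataStream<Tuple3<Long, Long, Float>> hourlyMax = hourlyTips
+        .windowAll(TumblingEventTimeWindows.of(Time.hours(1)))
+        .maxBy(2);
+```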
+</details>
+
+## 相关文档
+
+- 
[窗口](https://nightlies.apache.org/flink/flink-docs-stable/zh/docs/dev/datastream/operators/windows)
+- 
[参阅窗口聚合操作章节](https://nightlies.apache.org/flink/flink-docs-stable/zh/docs/dev/datastream/operators/overview/#datastream-transformations)
+
+## 参考解决方案
+
+项目中提供了参考解决方案:
+
+- Java:  
[`org.apache.flink.training.solutions.hourlytips.HourlyTipsSolution`](src/solution/java/org/apache/flink/training/solutions/hourlytips/HourlyTipsSolution.java)
+- Scala: 
[`org.apache.flink.training.solutions.hourlytips.scala.HourlyTipsSolution`](src/solution/scala/org/apache/flink/training/solutions/hourlytips/scala/HourlyTipsSolution.scala)
+
+-----
+
+[**练习讨论: 窗口分析 (每小时小费)**](DISCUSSION_zh.md)
+
+[**返回练习概述**](../README_zh.md#lab-exercises)
diff --git a/long-ride-alerts/DISCUSSION.md b/long-ride-alerts/DISCUSSION.md
index 143b5e2..7be4394 100644
--- a/long-ride-alerts/DISCUSSION.md
+++ b/long-ride-alerts/DISCUSSION.md
@@ -17,6 +17,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+[中文版](./DISCUSSION_zh.md)
+
 # Lab Discussion: `KeyedProcessFunction` and Timers (Long Ride Alerts)
 
 (Discussion of [Lab: `KeyedProcessFunction` and Timers (Long Ride Alerts)](./))
diff --git a/long-ride-alerts/DISCUSSION_zh.md 
b/long-ride-alerts/DISCUSSION_zh.md
new file mode 100644
index 0000000..25996ef
--- /dev/null
+++ b/long-ride-alerts/DISCUSSION_zh.md
@@ -0,0 +1,50 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 练习讨论: `ProcessFunction` 及定时器(长车程警报)
+
+(关于[练习: `ProcessFunction` 及定时器(长车程警报)](./README_zh.md)的讨论)
+
+### 分析
+
+这些情况值得注意:
+
+* _缺少 START 事件_。此时 END 事件将被无限期地存储在状态中(这是一处状态泄漏!)。
+* _缺少 END 事件_。计时器将被触发并且状态将被清除(这没有问题)。
+* _END 事件在计时器触发并清除状态后到达_。在这种情况下,END 事件同样会被无限期地存储在状态中(这是另一处状态泄漏!)。
+
+这些泄漏可以通过使用[状态有效期(State TTL)](https://nightlies.apache.org/flink/flink-docs-stable/zh/docs/dev/datastream/fault-tolerance/state/#state-time-to-live-ttl)或额外的计时器来修复,从而最终清除残留的状态。
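+
+例如,可以在函数的 `open()` 方法中为保存事件的状态描述符启用状态有效期(下面只是一个草图,保留时长等参数为假设值,状态字段名也仅为示意):
+
+```java
+// 为保存车程事件的状态启用 TTL,长时间未被更新的残留状态会被自动清理
+// 注意:状态有效期目前基于处理时间,语义与事件时间计时器并不完全相同
+StateTtlConfig ttlConfig = StateTtlConfig
+        .newBuilder(Time.hours(24))                      // 假设的保留时长
+        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
+        .build();
+
+ValueStateDescriptor<TaxiRide> rideDescriptor =
+        new ValueStateDescriptor<>("ride event", TaxiRide.class);
+rideDescriptor.enableTimeToLive(ttlConfig);
+rideState = getRuntimeContext().getState(rideDescriptor);   // rideState 为函数中的状态字段
+```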
+
+### 底线
+
+无论我们在保留哪些状态、保留多长时间上多么聪明,最终都需要清除这些状态——否则状态将无限增长。
+而一旦丢弃了这些信息,就要冒延迟事件导致错误结果或重复结果的风险。
+
+在永久地保持状态与在事件延迟时偶尔出错之间的权衡是有状态流处理中固有的挑战。
+
+### 如果你想走得更远
+
+对于下列的每一项,添加测试以检查所需的行为。
+
+* 扩展解决方案,使其永远不会泄漏状态。
+* 定义事件丢失的含义,检测丢失的 START 和 END 事件,并将一些通知发送到旁路输出。
+
+-----
+
+[**返回练习概述**](../README_zh.md#lab-exercises)
diff --git a/long-ride-alerts/README.md b/long-ride-alerts/README.md
index 07bc566..d1d98b1 100644
--- a/long-ride-alerts/README.md
+++ b/long-ride-alerts/README.md
@@ -17,6 +17,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+[中文版](./README_zh.md)
+
 # Lab: `ProcessFunction` and Timers (Long Ride Alerts)
 
 The goal of the "Long Ride Alerts" exercise is to provide a warning whenever a 
taxi ride
diff --git a/long-ride-alerts/README.md b/long-ride-alerts/README_zh.md
similarity index 53%
copy from long-ride-alerts/README.md
copy to long-ride-alerts/README_zh.md
index 07bc566..3b1d060 100644
--- a/long-ride-alerts/README.md
+++ b/long-ride-alerts/README_zh.md
@@ -17,79 +17,77 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Lab: `ProcessFunction` and Timers (Long Ride Alerts)
+# 练习: `ProcessFunction` 及定时器(长车程警报)
 
-The goal of the "Long Ride Alerts" exercise is to provide a warning whenever a 
taxi ride
-lasts for more than two hours.
+“长车程警报”练习的目标是对于持续超过两个小时的出租车车程发出警报。
 
-This should be done using the event time timestamps and watermarks that are 
provided in the data stream.
+这应该使用数据流中提供的事件时间时间戳和水位线来完成。
 
-The stream is out-of-order, and it is possible that the END event for a ride 
will be processed before
-its START event.
+流是无序的,并且可能会在其 START 事件之前处理车程的 END 事件。
 
-An END event may be missing, but you may assume there are no duplicated 
events, and no missing START events.
+END 事件可能会丢失,但你可以假设没有重复的事件,也没有丢失的 START 事件。
 
-It is not enough to simply wait for the END event and calculate the duration, 
as we want to be alerted
-about the long ride as soon as possible.
+仅仅等待 END 事件并计算持续时间是不够的,因为我们希望尽快收到关于长车程的警报。
 
-You should eventually clear any state you create.
+最终应该清除创建的任何状态。
 
-### Input Data
+### 输入数据
 
-The input data of this exercise is a `DataStream` of taxi ride events.
+输入数据是出租车乘车事件的 `DataStream`。
 
-### Expected Output
+### 期望输出
 
-The result of the exercise should be a `DataStream<LONG>` that contains the 
`rideId` for rides
-with a duration that exceeds two hours.
+所希望的结果应该是一个 `DataStream<Long>`,其中包含持续时间超过两小时的车程的 `rideId`。
 
-The resulting stream should be printed to standard out.
+结果流应打印到标准输出。
 
-## Getting Started
+## 入门指南
 
-> :information_source: Rather than following these links to the sources, you 
might prefer to open these classes in your IDE.
-
-### Exercise Classes
+> :information_source: 最好在 IDE 的 flink-training 项目中找到这些类,而不是使用本节中源文件的链接。
+
+### 练习相关类
 
 - Java:  
[`org.apache.flink.training.exercises.longrides.LongRidesExercise`](src/main/java/org/apache/flink/training/exercises/longrides/LongRidesExercise.java)
 - Scala: 
[`org.apache.flink.training.exercises.longrides.scala.LongRidesExercise`](src/main/scala/org/apache/flink/training/exercises/longrides/scala/LongRidesExercise.scala)
 
-### Unit Tests
+### 单元测试
 
 - Java:  
[`org.apache.flink.training.exercises.longrides.LongRidesUnitTest`](src/test/java/org/apache/flink/training/exercises/longrides/LongRidesUnitTest.java)
 - Scala: 
[`org.apache.flink.training.exercises.longrides.scala.LongRidesUnitTest`](src/test/scala/org/apache/flink/training/exercises/longrides/scala/LongRidesUnitTest.scala)
 
-### Integration Tests
+### 集成测试
 
 - Java:  
[`org.apache.flink.training.exercises.longrides.LongRidesIntegrationTest`](src/test/java/org/apache/flink/training/exercises/longrides/LongRidesIntegrationTest.java)
 - Scala: 
[`org.apache.flink.training.exercises.longrides.scala.LongRidesIntegrationTest`](src/test/scala/org/apache/flink/training/exercises/longrides/scala/LongRidesIntegrationTest.scala)
 
-## Implementation Hints
+## 实现提示
 
 <details>
-<summary><strong>Overall approach</strong></summary>
+<summary><strong>整体方案</strong></summary>
 
-This exercise revolves around using a `KeyedProcessFunction` to manage some 
state and event time timers,
-and doing so in a way that works even when the END event for a given `rideId` 
arrives before the START.
-The challenge is figuring out what state and timers to use, and when to set 
and clear the state (and timers).
+这个练习围绕着使用 `KeyedProcessFunction` 来管理一些状态和事件时间计时器,
+使用这种方法即使在给定 `rideId` 的 END 事件在 START 之前到达时也能正常工作。
+挑战在于弄清楚要使用什么状态和计时器,以及何时设置和清除状态(和计时器)。
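+
+下面是一个高度简化的骨架(仅演示“键控状态 + 事件时间计时器”的基本机制;为了简洁,省略了 END 事件先到且车程超时等乱序情形的处理,字段访问方式以 common 模块中的 `TaxiRide` 为准):
+
+```java
+import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
+import org.apache.flink.training.exercises.common.datatypes.TaxiRide;
+import org.apache.flink.util.Collector;
+
+public class LongRideAlertSketch extends KeyedProcessFunction<Long, TaxiRide, Long> {
+
+    // 保存先到达的那个事件(START 或 END),等待与之配对的另一事件
+    private ValueState<TaxiRide> rideState;
+
+    @Override
+    public void open(Configuration config) {
+        rideState = getRuntimeContext().getState(
+                new ValueStateDescriptor<>("ride event", TaxiRide.class));
+    }
+
+    @Override
+    public void processElement(TaxiRide ride, Context ctx, Collector<Long> out) throws Exception {
+        TaxiRide firstEvent = rideState.value();
+        if (firstEvent == null) {
+            // 先到达的事件:保存起来;若是 START,则在(开始时间 + 2 小时)处注册事件时间计时器
+            rideState.update(ride);
+            if (ride.isStart) {
+                ctx.timerService().registerEventTimeTimer(twoHoursAfter(ride));
+            }
+        } else {
+            // 两个事件都已到达:若 START 先到且计时器尚未触发,则删除计时器,然后清理状态
+            if (firstEvent.isStart) {
+                ctx.timerService().deleteEventTimeTimer(twoHoursAfter(firstEvent));
+            }
+            rideState.clear();
+        }
+    }
+
+    @Override
+    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
+        // 计时器触发说明两小时内没有等到 END 事件:对该车程发出警报并清除状态
+        out.collect(ctx.getCurrentKey());
+        rideState.clear();
+    }
+
+    private static long twoHoursAfter(TaxiRide ride) {
+        return ride.eventTime.plusSeconds(2 * 60 * 60).toEpochMilli();
+    }
+}
+```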
 </details>
 
-## Documentation
+## 相关文档
 
-- 
[ProcessFunction](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/operators/process_function)
-- [Working with 
State](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/fault-tolerance/state)
+- 
[ProcessFunction](https://nightlies.apache.org/flink/flink-docs-stable/zh/docs/dev/datastream/operators/process_function)
+- 
[使用状态](https://nightlies.apache.org/flink/flink-docs-stable/zh/docs/dev/datastream/fault-tolerance/state)
 
-## After you've completed the exercise
+## 完成练习后
 
-Read the [discussion of the reference solutions](DISCUSSION.md).
+阅读[参考解决方案的讨论](DISCUSSION_zh.md)。
 
-## Reference Solutions
+## 参考解决方案
 
-Reference solutions:
+项目中提供了参考解决方案:
 
 - Java API:  
[`org.apache.flink.training.solutions.longrides.LongRidesSolution`](src/solution/java/org/apache/flink/training/solutions/longrides/LongRidesSolution.java)
 - Scala API: 
[`org.apache.flink.training.solutions.longrides.scala.LongRidesSolution`](src/solution/scala/org/apache/flink/training/solutions/longrides/scala/LongRidesSolution.scala)
 
 -----
 
-[**Back to Labs Overview**](../README.md#lab-exercises)
+[**练习讨论: `ProcessFunction` 及定时器(长车程警报)**](DISCUSSION_zh.md)
+
+[**返回练习概述**](../README_zh.md#lab-exercises)
diff --git a/ride-cleansing/README.md b/ride-cleansing/README.md
index 254291b..cbe68ae 100644
--- a/ride-cleansing/README.md
+++ b/ride-cleansing/README.md
@@ -17,6 +17,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+[中文版](./README_zh.md)
+
 # Lab: Filtering a Stream (Ride Cleansing)
 
 If you haven't already done so, you'll need to first [setup your Flink 
development environment](../README.md). See [How to do the 
Labs](../README.md#how-to-do-the-labs) for an overall introduction to these 
exercises.
diff --git a/ride-cleansing/README.md b/ride-cleansing/README_zh.md
similarity index 51%
copy from ride-cleansing/README.md
copy to ride-cleansing/README_zh.md
index 254291b..36b194a 100644
--- a/ride-cleansing/README.md
+++ b/ride-cleansing/README_zh.md
@@ -17,73 +17,76 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Lab: Filtering a Stream (Ride Cleansing)
+# 练习: 过滤流(车程清理)
 
-If you haven't already done so, you'll need to first [setup your Flink 
development environment](../README.md). See [How to do the 
Labs](../README.md#how-to-do-the-labs) for an overall introduction to these 
exercises.
+如果尚未设置 Flink 开发环境,请参阅[指南](../README_zh.md)。
+有关练习的总体介绍,请参阅[如何做练习](../README_zh.md#how-to-do-the-lab-exercises)。
 
-The task of the "Taxi Ride Cleansing" exercise is to cleanse a stream of 
TaxiRide events by removing events that start or end outside of New York City.
+“出租车车程清理”练习的任务是通过删除在纽约市以外开始或结束的车程来清理 `TaxiRide` 事件流。
 
-The `GeoUtils` utility class provides a static method `isInNYC(float lon, 
float lat)` to check if a location is within the NYC area.
+`GeoUtils` 实用程序类提供了一个静态方法 `isInNYC(float lon, float lat)` 来检查某个位置是否在纽约市区域内。
 
-### Input Data
+### 输入数据
 
-This exercise is based on a stream of `TaxiRide` events, as described in 
[Using the Taxi Data Streams](../README.md#using-the-taxi-data-streams).
+此练习基于 `TaxiRide` 事件流,如[使用出租车数据流](../README_zh.md#using-the-taxi-data-streams)中所述。
 
-### Expected Output
+### 期望输出
 
-The result of the exercise should be a `DataStream<TaxiRide>` that only 
contains events of taxi rides which both start and end in the New York City 
area as defined by `GeoUtils.isInNYC()`.
+练习的结果应该是一个 `DataStream<TaxiRide>`,它只包含在 `GeoUtils.isInNYC()` 
定义的纽约市地区开始和结束的出租车车程事件。
 
-The resulting stream should be printed to standard out.
+结果流应打印到标准输出。
 
-## Getting Started
+## 入门指南
 
-> :information_source: Rather than following the links to the sources in this 
section, you'll do better to find these classes in the flink-training project 
in your IDE.
-> Both IntelliJ and Eclipse have ways to make it easy to search for and 
navigate to classes and files. For IntelliJ, see [the help on 
searching](https://www.jetbrains.com/help/idea/searching-everywhere.html), or 
simply press the Shift key twice and then continue typing something like 
`RideCleansing` and then select from the choices that popup.
+> :information_source: 最好在 IDE 的 flink-training 项目中找到这些类,而不是使用本节中源文件的链接。
+> IntelliJ 和 Eclipse 都可以轻松搜索和导航到类和文件。对于 
IntelliJ,请参阅[搜索帮助](https://www.jetbrains.com/help/idea/searching-everywhere.html),或者只需按
 Shift 键两次,然后继续输入类似 `RideCleansing` 的内容,接着从弹出的选项中选择。
 
-### Exercise Classes
+### 练习相关类
 
-This exercise uses these classes:
+本练习使用以下类:
 
 - Java:  
[`org.apache.flink.training.exercises.ridecleansing.RideCleansingExercise`](src/main/java/org/apache/flink/training/exercises/ridecleansing/RideCleansingExercise.java)
 - Scala: 
[`org.apache.flink.training.exercises.ridecleansing.scala.RideCleansingExercise`](src/main/scala/org/apache/flink/training/exercises/ridecleansing/scala/RideCleansingExercise.scala)
 
-### Tests
+### 测试
 
-You will find the tests for this exercise in
+练习的测试位于:
 
 - Java:  
[`org.apache.flink.training.exercises.ridecleansing.RideCleansingIntegrationTest`](src/test/java/org/apache/flink/training/exercises/ridecleansing/RideCleansingIntegrationTest.java)
 - Java:  
[`org.apache.flink.training.exercises.ridecleansing.RideCleansingUnitTest`](src/test/java/org/apache/flink/training/exercises/ridecleansing/RideCleansingUnitTest.java)
 - Scala: 
[`org.apache.flink.training.exercises.ridecleansing.scala.RideCleansingIntegrationTest`](src/test/scala/org/apache/flink/training/exercises/ridecleansing/scala/RideCleansingIntegrationTest.scala)
 - Scala: 
[`org.apache.flink.training.exercises.ridecleansing.scala.RideCleansingUnitTest`](src/test/scala/org/apache/flink/training/exercises/ridecleansing/scala/RideCleansingUnitTest.scala)
 
-Like most of these exercises, at some point the `RideCleansingExercise` class 
throws an exception
+像大多数练习一样,在某些时候,`RideCleansingExercise` 类会抛出异常
 
 ```java
 throw new MissingSolutionException();
 ```
 
-Once you remove this line, the test will fail until you provide a working 
solution. You might want to first try something clearly broken, such as
+一旦删除此行,测试将会失败,直到你提供有效的解决方案。你也可能想先尝试一些明显错误的代码,例如
 
 ```java
 return false;
 ```
 
-in order to verify that the test does indeed fail when you make a mistake, and 
then work on implementing a proper solution.
+以此验证出错的代码确实会导致测试失败,然后再着手实现正确的解决方案。
 
-## Implementation Hints
+## 实现提示
 
 <details>
-<summary><strong>Filtering Events</strong></summary>
+<summary><strong>过滤事件</strong></summary>
 
-Flink's DataStream API features a `DataStream.filter(FilterFunction)` 
transformation to filter events from a data stream. The `GeoUtils.isInNYC()` 
function can be called within a `FilterFunction` to check if a location is in 
the New York City area. Your filter function should check both the starting and 
ending locations of each ride.
+Flink 的 DataStream API 提供了一个 `DataStream.filter(FilterFunction)` 
转换函数来过滤数据流中的事件。
+可以在 `FilterFunction` 中调用 `GeoUtils.isInNYC()` 函数来检查某个位置是否在纽约市地区。
+过滤器应检查每次车程的起点和终点。
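+
+一个可能的写法示意如下(`rides` 假定为输入的 `DataStream<TaxiRide>`;这只是草图,并非参考解决方案的原文):
+
+```java
+DataStream<TaxiRide> nycRides = rides
+        .filter(ride -> GeoUtils.isInNYC(ride.startLon, ride.startLat)
+                && GeoUtils.isInNYC(ride.endLon, ride.endLat));
+```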
 </details>
 
-## Documentation
+## 相关文档
 
-- [DataStream 
API](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/overview)
+- [DataStream 
API](https://nightlies.apache.org/flink/flink-docs-stable/zh/docs/dev/datastream/overview)
 - [Flink 
JavaDocs](https://nightlies.apache.org/flink/flink-docs-stable/api/java)
 
-## Reference Solutions
+## 参考解决方案
 
-Reference solutions are available in this project:
+项目中提供了参考解决方案:
 
@@ -92,4 +95,4 @@ Reference solutions are available in this project:
 
 -----
 
-[**Back to Labs Overview**](../README.md#lab-exercises)
+[**返回练习概述**](../README_zh.md#lab-exercises)
diff --git a/rides-and-fares/README.md b/rides-and-fares/README.md
index 2172513..9bf088a 100644
--- a/rides-and-fares/README.md
+++ b/rides-and-fares/README.md
@@ -17,6 +17,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+[中文版](./README_zh.md)
+
 # Lab: Stateful Enrichment (Rides and Fares)
 
 The goal of this exercise is to join together the `TaxiRide` and `TaxiFare` 
records for each ride.
diff --git a/rides-and-fares/README_zh.md b/rides-and-fares/README_zh.md
new file mode 100644
index 0000000..f54852f
--- /dev/null
+++ b/rides-and-fares/README_zh.md
@@ -0,0 +1,95 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 练习: 有状态的增强(车程及车费)
+
+本练习的目标是将每次车程的 `TaxiRide` 和 `TaxiFare` 记录连接在一起。
+
+对于每个不同的 `rideId`,恰好有三个事件:
+
+1. `TaxiRide` START 事件
+1. `TaxiRide` END 事件
+1. 一个 `TaxiFare` 事件(其时间戳恰好与开始时间匹配)
+
+最终的结果应该是 `DataStream<RideAndFare>`,每个不同的 `rideId` 都产生一个 `RideAndFare` 记录。
+每个 `RideAndFare` 都应该将某个 `rideId` 的 `TaxiRide` START 事件与其匹配的 `TaxiFare` 配对。
+
+### 输入数据
+
+在本练习中,你将使用两个数据流:一个是由 `TaxiRideSource` 生成的 `TaxiRide` 事件流,另一个是由 `TaxiFareSource` 生成的 `TaxiFare` 事件流。
+有关如何使用这些流生成器的信息,请参阅 [使用出租车数据流](../README_zh.md#using-the-taxi-data-streams)。
+
+### 期望输出
+
+所希望的结果是一个 `RideAndFare` 记录的数据流,每个不同的 `rideId` 都有一条这样的记录。
+本练习设置为忽略 END 事件,你应该连接每次乘车的 START 事件及其相应的车费事件。
+
+一旦凑齐了相互关联的车程和车费事件,就可以使用 `new RideAndFare(ride, fare)` 为输出流创建所需的对象。
+
+流将会被打印到标准输出。
+
+## 入门指南
+
+> :information_source: 最好在 IDE 的 flink-training 项目中找到这些类,而不是使用本节中源文件的链接。
+
+### 练习相关类
+
+- Java:  
[`org.apache.flink.training.exercises.ridesandfares.RidesAndFaresExercise`](src/main/java/org/apache/flink/training/exercises/ridesandfares/RidesAndFaresExercise.java)
+- Scala: 
[`org.apache.flink.training.exercises.ridesandfares.scala.RidesAndFaresExercise`](src/main/scala/org/apache/flink/training/exercises/ridesandfares/scala/RidesAndFaresExercise.scala)
+
+### 集成测试
+
+- Java:  
[`org.apache.flink.training.exercises.ridesandfares.RidesAndFaresIntegrationTest`](src/test/java/org/apache/flink/training/exercises/ridesandfares/RidesAndFaresIntegrationTest.java)
+- Scala: 
[`org.apache.flink.training.exercises.ridesandfares.scala.RidesAndFaresIntegrationTest`](src/test/scala/org/apache/flink/training/exercises/ridesandfares/scala/RidesAndFaresIntegrationTest.scala)
+
+## 实现提示
+
+<details>
+<summary><strong>程序结构</strong></summary>
+
+可以使用 `RichCoFlatMapFunction` 来实现连接操作。请注意,你无法控制每个 `rideId` 的车程和车费记录的到达顺序,
+因此需要先存储其中一个事件,直到与其匹配的另一个事件到达。
+此时就可以创建并发出 `RideAndFare`,将两条记录连接在一起。
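+
+下面是该思路的一个简化骨架(类名为示意;数据类型来自本仓库的 common 模块,实际实现请以练习代码与参考解决方案为准):
+
+```java
+import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
+import org.apache.flink.training.exercises.common.datatypes.RideAndFare;
+import org.apache.flink.training.exercises.common.datatypes.TaxiFare;
+import org.apache.flink.training.exercises.common.datatypes.TaxiRide;
+import org.apache.flink.util.Collector;
+
+public class EnrichmentSketch extends RichCoFlatMapFunction<TaxiRide, TaxiFare, RideAndFare> {
+
+    // 按 rideId 键控的状态:各自缓存先到达的车程或车费事件
+    private ValueState<TaxiRide> rideState;
+    private ValueState<TaxiFare> fareState;
+
+    @Override
+    public void open(Configuration config) {
+        rideState = getRuntimeContext().getState(
+                new ValueStateDescriptor<>("saved ride", TaxiRide.class));
+        fareState = getRuntimeContext().getState(
+                new ValueStateDescriptor<>("saved fare", TaxiFare.class));
+    }
+
+    @Override
+    public void flatMap1(TaxiRide ride, Collector<RideAndFare> out) throws Exception {
+        TaxiFare fare = fareState.value();
+        if (fare != null) {
+            // 车费先到:配对输出,并清除不再需要的状态
+            fareState.clear();
+            out.collect(new RideAndFare(ride, fare));
+        } else {
+            // 否则先缓存车程事件,等待车费到达
+            rideState.update(ride);
+        }
+    }
+
+    @Override
+    public void flatMap2(TaxiFare fare, Collector<RideAndFare> out) throws Exception {
+        TaxiRide ride = rideState.value();
+        if (ride != null) {
+            rideState.clear();
+            out.collect(new RideAndFare(ride, fare));
+        } else {
+            fareState.update(fare);
+        }
+    }
+}
+```
+
+使用时可以先分别按 `rideId` 对两个流进行 keyBy,再 `connect` 并应用上述函数,这样状态才是按键值分割的。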
+</details>
+
+<details>
+<summary><strong>使用状态</strong></summary>
+
+应该使用由 Flink 管理的、按键值分割(keyed)的状态来缓冲想要暂时保存的数据,直到匹配事件到达,并确保在不再需要时清除该状态。
+</details>
+
+## 讨论
+
+出于练习的目的,可以假设 START 事件和车费(fare)事件总是完美配对。
+但是在现实世界的应用程序中,每当其中一个事件丢失时,同一个 `rideId` 的另一个事件所对应的状态就会被永远保留,这一点需要注意。
+在 [稍后的练习](../long-ride-alerts/README_zh.md) 中,我们将看到 `ProcessFunction` 
和定时器,它们将有助于处理这样的情况。
+
+## 相关文档
+
+- 
[使用状态](https://nightlies.apache.org/flink/flink-docs-stable/zh/docs/dev/datastream/fault-tolerance/state)
+
+## 参考解决方案
+
+项目中提供了参考解决方案:
+
+- Java:  
[`org.apache.flink.training.solutions.ridesandfares.RidesAndFaresSolution`](src/solution/java/org/apache/flink/training/solutions/ridesandfares/RidesAndFaresSolution.java)
+- Scala: 
[`org.apache.flink.training.solutions.ridesandfares.scala.RidesAndFaresSolution`](src/solution/scala/org/apache/flink/training/solutions/ridesandfares/scala/RidesAndFaresSolution.scala)
+
+-----
+
+[**返回练习概述**](../README_zh.md#lab-exercises)
