This is an automated email from the ASF dual-hosted git repository.

xccui pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 6af7f48  [FLINK-11528][docs] Translate the "Use Cases" page into 
Chinese and fix some typos
6af7f48 is described below

commit 6af7f48bd754fb9a5635c25ec7656677fcf10b9b
Author: Xingcan Cui <xc...@apache.org>
AuthorDate: Thu Mar 14 00:30:05 2019 -0400

    [FLINK-11528][docs] Translate the "Use Cases" page into Chinese and fix 
some typos
    
    This closes #188.
---
 content/usecases.html    |  2 +-
 content/zh/usecases.html | 97 ++++++++++++++++++++++++------------------------
 usecases.md              |  2 +-
 usecases.zh.md           | 97 ++++++++++++++++++++++++------------------------
 4 files changed, 98 insertions(+), 100 deletions(-)

diff --git a/content/usecases.html b/content/usecases.html
index 9c12aba..fe87882 100644
--- a/content/usecases.html
+++ b/content/usecases.html
@@ -189,7 +189,7 @@
 
 <p>The limits of event-driven applications are defined by how well a stream 
processor can handle time and state. Many of Flink’s outstanding features are 
centered around these concepts. Flink provides a rich set of state primitives 
that can manage very large data volumes (up to several terabytes) with 
exactly-once consistency guarantees. Moreover, Flink’s support for event-time, 
highly customizable window logic, and fine-grained control of time as provided 
by the <code>ProcessFunction</c [...]
 
-<p>However, Flink’s outstanding feature for event-driven applications are 
savepoints. A savepoint a consistent state image that can be used as a starting 
point for compatible applications. Given a savepoint, an application can be 
updated or adapt its scale, or multiple versions of an application can be 
started for A/B testing.</p>
+<p>However, Flink’s outstanding feature for event-driven applications is the 
savepoint. A savepoint is a consistent state image that can be used as a 
starting point for compatible applications. Given a savepoint, an application 
can be updated or rescaled, or multiple versions of an application can 
be started for A/B testing.</p>
 
 <h3 id="what-are-typical-event-driven-applications">What are typical 
event-driven applications?</h3>
 
diff --git a/content/zh/usecases.html b/content/zh/usecases.html
index b0b1917..45dcd02 100644
--- a/content/zh/usecases.html
+++ b/content/zh/usecases.html
@@ -154,115 +154,114 @@
 
        <hr />
 
-<p>Apache Flink is an excellent choice to develop and run many different types 
of applications due to its extensive features set. Flink’s features include 
support for stream and batch processing, sophisticated state management, 
event-time processing semantics, and exactly-once consistency guarantees for 
state. Moreover, Flink can be deployed on various resource providers such as 
YARN, Apache Mesos, and Kubernetes but also as stand-alone cluster on 
bare-metal hardware. Configured for high [...]
+<p>Apache Flink 
功能强大,支持开发和运行多种不同种类的应用程序。它的主要特性包括:批流一体化、精密的状态管理、事件时间支持以及精确一次的状态一致性保障等。Flink 
不仅可以运行在包括 YARN、 Mesos、Kubernetes 
在内的多种资源管理框架上,还支持在裸机集群上独立部署。在启用高可用选项的情况下,它不存在单点失效问题。事实证明,Flink 
已经可以扩展到数千核心,其状态可以达到 TB 级别,且仍能保持高吞吐、低延迟的特性。世界各地有很多要求严苛的流处理应用都运行在 Flink 之上。</p>
 
-<p>Below, we explore the most common types of applications that are powered by 
Flink and give pointers to real-world examples.</p>
+<p>接下来我们将介绍 Flink 常见的几类应用并给出相关实例链接。</p>
 
 <ul>
-  <li><a href="#eventDrivenApps">Event-driven Applications</a></li>
-  <li><a href="#analytics">Data Analytics Applications</a></li>
-  <li><a href="#pipelines">Data Pipeline Applications</a></li>
+  <li><a href="#eventDrivenApps">事件驱动型应用</a></li>
+  <li><a href="#analytics">数据分析应用</a></li>
+  <li><a href="#pipelines">数据管道应用</a></li>
 </ul>
 
-<h2 id="event-driven-applications-a-nameeventdrivenappsa">Event-driven 
Applications <a name="eventDrivenApps"></a></h2>
+<h2 id="a-nameeventdrivenappsa">事件驱动型应用 <a name="eventDrivenApps"></a></h2>
 
-<h3 id="what-are-event-driven-applications">What are event-driven 
applications?</h3>
+<h3 id="section">什么是事件驱动型应用?</h3>
 
-<p>An event-driven application is a stateful application that ingest events 
from one or more event streams and reacts to incoming events by triggering 
computations, state updates, or external actions.</p>
+<p>事件驱动型应用是一类具有状态的应用,它从一个或多个事件流提取数据,并根据到来的事件触发计算、状态更新或其他外部动作。</p>
 
-<p>Event-driven applications are an evolution of the traditional application 
design with separated compute and data storage tiers. In this architecture, 
applications read data from and persist data to a remote transactional 
database.</p>
+<p>事件驱动型应用是在计算存储分离的传统应用基础上进化而来。在传统架构中,应用需要读写远程事务型数据库。</p>
 
-<p>In contrast, event-driven applications are based on stateful stream 
processing applications. In this design, data and computation are co-located, 
which yields local (in-memory or disk) data access. Fault-tolerance is achieved 
by periodically writing checkpoints to a remote persistent storage. The figure 
below depicts the difference between the traditional application architecture 
and event-driven applications.</p>
+<p>相反,事件驱动型应用是基于状态化流处理来完成。在该设计中,数据和计算不会分离,应用只需访问本地(内存或磁盘)即可获取数据。系统容错性的实现依赖于定期向远程持久化存储写入
 checkpoint。下图描述了传统应用和事件驱动型应用架构的区别。</p>
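The co-located-state design described in this paragraph (state kept locally, fault tolerance via periodic checkpoints to remote durable storage) can be sketched without Flink in a few lines of Python; the `EventDrivenApp` class and its methods are invented for illustration and are not Flink API:

```python
import copy

class EventDrivenApp:
    """Toy stateful stream processor: state lives locally; a periodic
    checkpoint copies it to 'remote' durable storage for recovery."""

    def __init__(self, checkpoint_interval=3):
        self.state = {}                  # local (in-memory) state
        self.checkpoints = []            # stands in for remote durable storage
        self.checkpoint_interval = checkpoint_interval
        self.processed = 0

    def on_event(self, key, amount):
        # state update is a local operation -- no remote database round-trip
        self.state[key] = self.state.get(key, 0) + amount
        self.processed += 1
        if self.processed % self.checkpoint_interval == 0:
            # snapshot shipped to remote storage (asynchronously in a real system)
            self.checkpoints.append(copy.deepcopy(self.state))

    def recover(self):
        # after a failure, restart from the last completed checkpoint
        self.state = copy.deepcopy(self.checkpoints[-1])

app = EventDrivenApp()
for key, amount in [("a", 1), ("b", 2), ("a", 3), ("a", 4)]:
    app.on_event(key, amount)
app.state["b"] += 99   # simulate corruption after the last checkpoint
app.recover()          # roll back to the checkpoint taken after 3 events
print(app.state)       # {'a': 4, 'b': 2}
```

In a real Flink job the checkpoint is taken asynchronously and incrementally, which is why its impact on regular event processing stays small.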
 
 <p><br /></p>
 <div class="row front-graphic">
   <img src="/img/usecases-eventdrivenapps.png" width="700px" />
 </div>
 
-<h3 id="what-are-the-advantages-of-event-driven-applications">What are the 
advantages of event-driven applications?</h3>
+<h3 id="section-1">事件驱动型应用的优势?</h3>
 
-<p>Instead of querying a remote database, event-driven applications access 
their data locally which yields better performance, both in terms of throughput 
and latency. The periodic checkpoints to a remote persistent storage can be 
asynchronously and incrementally done. Hence, the impact of checkpointing on 
the regular event processing is very small. However, the event-driven 
application design provides more benefits than just local data access. In the 
tiered architecture, it is common th [...]
+<p>事件驱动型应用无须查询远程数据库,本地数据访问使得它具有更高的吞吐和更低的延迟。而由于定期向远程持久化存储的 checkpoint 
工作可以异步、增量式完成,因此对于正常事件处理的影响甚微。事件驱动型应用的优势不仅限于本地数据访问。传统分层架构下,通常多个应用会共享同一个数据库,因而任何对数据库自身的更改(例如:由应用更新或服务扩容导致数据布局发生改变)都需要谨慎协调。反观事件驱动型应用,由于只需考虑自身数据,因此在更改数据表示或服务扩容时所需的协调工作将大大减少。</p>
 
-<h3 id="how-does-flink-support-event-driven-applications">How does Flink 
support event-driven applications?</h3>
+<h3 id="flink-">Flink 如何支持事件驱动型应用?</h3>
 
-<p>The limits of event-driven applications are defined by how well a stream 
processor can handle time and state. Many of Flink’s outstanding features are 
centered around these concepts. Flink provides a rich set of state primitives 
that can manage very large data volumes (up to several terabytes) with 
exactly-once consistency guarantees. Moreover, Flink’s support for event-time, 
highly customizable window logic, and fine-grained control of time as provided 
by the <code>ProcessFunction</c [...]
+<p>事件驱动型应用会受制于底层流处理系统对时间和状态的把控能力,Flink 
诸多优秀特质都是围绕这些方面来设计的。它提供了一系列丰富的状态操作原语,允许以精确一次的一致性语义合并海量规模(TB 级别)的状态数据。此外,Flink 
还支持事件时间和自由度极高的定制化窗口逻辑,而且它内置的 <code>ProcessFunction</code> 
支持细粒度时间控制,方便实现一些高级业务逻辑。同时,Flink 还拥有一个复杂事件处理(CEP)类库,可以用来检测数据流中的模式。</p>
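The fine-grained control of time and state that `ProcessFunction` offers can be mimicked in a toy Python form (the class and method names below are invented, not the Flink API): per-key state plus an event-time timer that fires once the watermark passes it:

```python
class ToyProcessFunction:
    """Toy imitation of a keyed process function: per-key state plus
    event-time timers fired when the watermark advances past them."""

    def __init__(self, timeout):
        self.counts = {}      # per-key state
        self.timers = {}      # key -> event-time at which to fire
        self.timeout = timeout
        self.fired = []

    def process_element(self, key, timestamp):
        self.counts[key] = self.counts.get(key, 0) + 1
        # (re)register a timer: fire if the key stays idle for `timeout`
        self.timers[key] = timestamp + self.timeout

    def on_watermark(self, watermark):
        for key, when in list(self.timers.items()):
            if when <= watermark:          # event time has passed the timer
                self.fired.append((key, self.counts[key]))
                del self.timers[key]

fn = ToyProcessFunction(timeout=10)
fn.process_element("a", 100)
fn.process_element("b", 103)
fn.process_element("a", 105)   # resets a's idle timer to 115
fn.on_watermark(114)           # only b's timer (103 + 10) has expired
print(fn.fired)  # [('b', 1)]
```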
 
-<p>However, Flink’s outstanding feature for event-driven applications are 
savepoints. A savepoint a consistent state image that can be used as a starting 
point for compatible applications. Given a savepoint, an application can be 
updated or adapt its scale, or multiple versions of an application can be 
started for A/B testing.</p>
+<p>Flink 中针对事件驱动应用的明星特性当属 savepoint。Savepoint 
是一个一致性的状态映像,它可以用来初始化任意状态兼容的应用。在完成一次 savepoint 后,即可放心对应用升级或扩容,还可以启动多个版本的应用来完成 
A/B 测试。</p>
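The savepoint workflow described above (a consistent state image used to start compatible applications, e.g. for upgrades or A/B tests) can be illustrated with a minimal, Flink-free Python sketch; `run`, `logic_v1`, and `logic_v2` are invented names:

```python
import copy

def run(events, state, logic):
    """Toy job: fold events into state with some version of the logic."""
    for e in events:
        logic(state, e)
    return state

def logic_v1(state, e):
    state["count"] = state.get("count", 0) + 1

def logic_v2(state, e):          # upgraded logic: also tracks a running sum
    state["count"] = state.get("count", 0) + 1
    state["sum"] = state.get("sum", 0) + e

stream = [5, 7, 3, 8]

# run v1 over the first half, then take a "savepoint" of its state
state = run(stream[:2], {}, logic_v1)
savepoint = copy.deepcopy(state)          # consistent state image

# start two compatible versions from the same savepoint (A/B test);
# each resumes processing from where the savepoint was taken
version_a = run(stream[2:], copy.deepcopy(savepoint), logic_v1)
version_b = run(stream[2:], copy.deepcopy(savepoint), logic_v2)

print(version_a)  # {'count': 4}
print(version_b)  # {'count': 4, 'sum': 11}
```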
 
-<h3 id="what-are-typical-event-driven-applications">What are typical 
event-driven applications?</h3>
+<h3 id="section-2">典型的事件驱动型应用实例</h3>
 
 <ul>
-  <li><a 
href="https://sf-2017.flink-forward.org/kb_sessions/streaming-models-how-ing-adds-models-at-runtime-to-catch-fraudsters/">Fraud
 detection</a></li>
-  <li><a 
href="https://sf-2017.flink-forward.org/kb_sessions/building-a-real-time-anomaly-detection-system-with-flink-mux/">Anomaly
 detection</a></li>
-  <li><a 
href="https://sf-2017.flink-forward.org/kb_sessions/dynamically-configured-stream-processing-using-flink-kafka/">Rule-based
 alerting</a></li>
-  <li><a 
href="https://jobs.zalando.com/tech/blog/complex-event-generation-for-business-process-monitoring-using-apache-flink/">Business
 process monitoring</a></li>
-  <li><a 
href="https://berlin-2017.flink-forward.org/kb_sessions/drivetribes-kappa-architecture-with-apache-flink/">Web
 application (social network)</a></li>
+  <li><a 
href="https://sf-2017.flink-forward.org/kb_sessions/streaming-models-how-ing-adds-models-at-runtime-to-catch-fraudsters/">反欺诈</a></li>
+  <li><a 
href="https://sf-2017.flink-forward.org/kb_sessions/building-a-real-time-anomaly-detection-system-with-flink-mux/">异常检测</a></li>
+  <li><a 
href="https://sf-2017.flink-forward.org/kb_sessions/dynamically-configured-stream-processing-using-flink-kafka/">基于规则的报警</a></li>
+  <li><a 
href="https://jobs.zalando.com/tech/blog/complex-event-generation-for-business-process-monitoring-using-apache-flink/">业务流程监控</a></li>
+  <li><a 
href="https://berlin-2017.flink-forward.org/kb_sessions/drivetribes-kappa-architecture-with-apache-flink/">(社交网络)Web
 应用</a></li>
 </ul>
 
-<h2 id="data-analytics-applicationsa-nameanalyticsa">Data Analytics 
Applications<a name="analytics"></a></h2>
+<h2 id="a-nameanalyticsa">数据分析应用<a name="analytics"></a></h2>
 
-<h3 id="what-are-data-analytics-applications">What are data analytics 
applications?</h3>
+<h3 id="section-3">什么是数据分析应用?</h3>
 
-<p>Analytical jobs extract information and insight from raw data. 
Traditionally, analytics are performed as batch queries or applications on 
bounded data sets of recorded events. In order to incorporate the latest data 
into the result of the analysis, it has to be added to the analyzed data set 
and the query or application is rerun. The results are written to a storage 
system or emitted as reports.</p>
+<p>数据分析任务需要从原始数据中提取有价值的信息和指标。传统的分析方式通常是利用批查询,或将事件记录下来并基于此有限数据集构建应用来完成。为了得到最新数据的分析结果,必须先将它们加入分析数据集并重新执行查询或运行应用,随后将结果写入存储系统或生成报告。</p>
 
-<p>With a sophisticated stream processing engine, analytics can also be 
performed in a real-time fashion. Instead of reading finite data sets, 
streaming queries or applications ingest real-time event streams and 
continuously produce and update results as events are consumed. The results are 
either written to an external database or maintained as internal state. 
Dashboard application can read the latest results from the external database or 
directly query the internal state of the applica [...]
+<p>借助一些先进的流处理引擎,还可以实时地进行数据分析。和传统模式下读取有限数据集不同,流式查询或应用会接入实时事件流,并随着事件消费持续产生和更新结果。这些结果数据可能会写入外部数据库系统或以内部状态的形式维护。仪表展示应用可以相应地从外部数据库读取数据或直接查询应用的内部状态。</p>
 
-<p>Apache Flink supports streaming as well as batch analytical applications as 
shown in the figure below.</p>
+<p>如下图所示,Apache Flink 同时支持流式及批量分析应用。</p>
 
 <div class="row front-graphic">
   <img src="/img/usecases-analytics.png" width="700px" />
 </div>
 
-<h3 id="what-are-the-advantages-of-streaming-analytics-applications">What are 
the advantages of streaming analytics applications?</h3>
+<h3 id="section-4">流式分析应用的优势?</h3>
 
-<p>The advantages of continuous streaming analytics compared to batch 
analytics are not limited to a much lower latency from events to insight due to 
elimination of periodic import and query execution. In contrast to batch 
queries, streaming queries do not have to deal with artificial boundaries in 
the input data which are caused by periodic imports and the bounded nature of 
the input.</p>
+<p>和批量分析相比,由于流式分析省掉了周期性的数据导入和查询过程,因此从事件中获取指标的延迟更低。不仅如此,批量查询必须处理那些由定期导入和输入有界性导致的人工数据边界,而流式查询则无须考虑该问题。</p>
 
-<p>Another aspect is a simpler application architecture. A batch analytics 
pipeline consist of several independent components to periodically schedule 
data ingestion and query execution. Reliably operating such a pipeline is 
non-trivial because failures of one component affect the following steps of the 
pipeline. In contrast, a streaming analytics application which runs on a 
sophisticated stream processor like Flink incorporates all steps from data 
ingestions to continuous result computa [...]
+<p>另一方面,流式分析会简化应用抽象。批量查询的流水线通常由多个独立部件组成,需要周期性地调度提取数据和执行查询。如此复杂的流水线操作起来并不容易,一旦某个组件出错将会影响流水线的后续步骤。而流式分析应用整体运行在
 Flink 之类的高端流处理系统之上,涵盖了从数据接入到连续结果计算的所有步骤,因此可以依赖底层引擎提供的故障恢复机制。</p>
 
-<h3 id="how-does-flink-support-data-analytics-applications">How does Flink 
support data analytics applications?</h3>
+<h3 id="flink--1">Flink 如何支持数据分析类应用?</h3>
 
-<p>Flink provides very good support for continuous streaming as well as batch 
analytics. Specifically, it features an ANSI-compliant SQL interface with 
unified semantics for batch and streaming queries. SQL queries compute the same 
result regardless whether they are run on a static data set of recorded events 
or on a real-time event stream. Rich support for user-defined functions ensures 
that custom code can be executed in SQL queries. If even more custom logic is 
required, Flink’s DataS [...]
+<p>Flink 为持续流式分析和批量分析都提供了良好的支持。具体而言,它内置了一个符合 ANSI 标准的 SQL 
接口,将批、流查询的语义统一起来。无论是在记录事件的静态数据集上还是实时事件流上,相同 SQL 查询都会得到一致的结果。同时 Flink 
还支持丰富的用户自定义函数,允许在 SQL 中执行定制化代码。如果还需进一步定制逻辑,可以利用 Flink DataStream API 和 DataSet 
API 进行更低层次的控制。此外,Flink 的 Gelly 库为基于批量数据集的大规模高性能图分析提供了算法和构建模块支持。</p>
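The unified batch/stream semantics described here can be illustrated with a small Python sketch (not Flink SQL itself): the same aggregation yields the same final result whether it runs over the bounded data set at once or incrementally over the event stream, and the streaming form additionally emits continuous result updates:

```python
from collections import defaultdict

records = [("clicks", 3), ("views", 10), ("clicks", 2), ("views", 5)]

# "batch" evaluation: the whole bounded data set is available up front
def batch_sum(rows):
    totals = defaultdict(int)
    for key, value in rows:
        totals[key] += value
    return dict(totals)

# "streaming" evaluation: results are produced and updated per event
def stream_sum(rows):
    totals = defaultdict(int)
    emitted = []
    for key, value in rows:
        totals[key] += value
        emitted.append((key, totals[key]))   # continuous result updates
    return dict(totals), emitted

final_batch = batch_sum(records)
final_stream, updates = stream_sum(records)

# unified semantics: same query, same final result on bounded or streaming input
assert final_batch == final_stream == {"clicks": 5, "views": 15}
print(updates)  # [('clicks', 3), ('views', 10), ('clicks', 5), ('views', 15)]
```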
 
-<h3 id="what-are-typical-data-analytics-applications">What are typical data 
analytics applications?</h3>
+<h3 id="section-5">典型的数据分析应用实例</h3>
 
 <ul>
-  <li><a 
href="http://2016.flink-forward.org/kb_sessions/a-brief-history-of-time-with-apache-flink-real-time-monitoring-and-analysis-with-flink-kafka-hb/">Quality
 monitoring of Telco networks</a></li>
-  <li><a 
href="https://techblog.king.com/rbea-scalable-real-time-analytics-king/">Analysis
 of product updates &amp; experiment evaluation</a> in mobile applications</li>
-  <li><a href="https://eng.uber.com/athenax/">Ad-hoc analysis of live data</a> 
in consumer technology</li>
-  <li>Large-scale graph analysis</li>
+  <li><a 
href="http://2016.flink-forward.org/kb_sessions/a-brief-history-of-time-with-apache-flink-real-time-monitoring-and-analysis-with-flink-kafka-hb/">电信网络质量监控</a></li>
+  <li>移动应用中的<a 
href="https://techblog.king.com/rbea-scalable-real-time-analytics-king/">产品更新及实验评估分析</a></li>
+  <li>消费者技术中的<a href="https://eng.uber.com/athenax/">实时数据即席分析</a></li>
+  <li>大规模图分析</li>
 </ul>
 
-<h2 id="data-pipeline-applications-a-namepipelinesa">Data Pipeline 
Applications <a name="pipelines"></a></h2>
+<h2 id="a-namepipelinesa">数据管道应用 <a name="pipelines"></a></h2>
 
-<h3 id="what-are-data-pipelines">What are data pipelines?</h3>
+<h3 id="section-6">什么是数据管道?</h3>
 
-<p>Extract-transform-load (ETL) is a common approach to convert and move data 
between storage systems. Often ETL jobs are periodically triggered to copy data 
from from transactional database systems to an analytical database or a data 
warehouse.</p>
+<p>提取-转换-加载(ETL)是一种在存储系统之间进行数据转换和迁移的常用方法。ETL 
作业通常会周期性地触发,将数据从事务型数据库拷贝到分析型数据库或数据仓库。</p>
 
-<p>Data pipelines serve a similar purpose as ETL jobs. They transform and 
enrich data and can move it from one storage system to another. However, they 
operate in a continuous streaming mode instead of being periodically triggered. 
Hence, they are able to read records from sources that continuously produce 
data and move it with low latency to their destination. For example a data 
pipeline might monitor a file system directory for new files and write their 
data into an event log. Another  [...]
+<p>数据管道和 ETL 
作业的用途相似,都可以转换、丰富数据,并将其从某个存储系统移动到另一个。但数据管道是以持续流模式运行,而非周期性触发。因此它支持从一个不断生成数据的源头读取记录,并将它们以低延迟移动到终点。例如:数据管道可以用来监控文件系统目录中的新文件,并将其数据写入事件日志;另一个应用可能会将事件流物化到数据库或增量构建和优化查询索引。</p>
 
-<p>The figure below depicts the difference between periodic ETL jobs and 
continuous data pipelines.</p>
+<p>下图描述了周期性 ETL 作业和持续数据管道的差异。</p>
 
 <div class="row front-graphic">
   <img src="/img/usecases-datapipelines.png" width="700px" />
 </div>
 
-<h3 id="what-are-the-advantages-of-data-pipelines">What are the advantages of 
data pipelines?</h3>
+<h3 id="section-7">数据管道的优势?</h3>
 
-<p>The obvious advantage of continuous data pipelines over periodic ETL jobs 
is the reduced latency of moving data to its destination. Moreover, data 
pipelines are more versatile and can be employed for more use cases because 
they are able to continuously consume and emit data.</p>
+<p>和周期性 ETL 作业相比,持续数据管道可以明显降低将数据移动到目的端的延迟。此外,由于它能够持续消费和发送数据,因此用途更广,支持用例更多。</p>
 
-<h3 id="how-does-flink-support-data-pipelines">How does Flink support data 
pipelines?</h3>
+<h3 id="flink--2">Flink 如何支持数据管道应用?</h3>
 
-<p>Many common data transformation or enrichment tasks can be addressed by 
Flink’s SQL interface (or Table API) and its support for user-defined 
functions. Data pipelines with more advanced requirements can be realized by 
using the DataStream API which is more generic. Flink provides a rich set of 
connectors to various storage systems such as Kafka, Kinesis, Elasticsearch, 
and JDBC database systems. It also features continuous sources for file systems 
that monitor directories and sinks t [...]
+<p>很多常见的数据转换和增强操作可以利用 Flink 的 SQL 接口(或 Table 
API)及用户自定义函数解决。如果数据管道有更高级的需求,可以选择更通用的 DataStream API 来实现。Flink 
为多种数据存储系统(如:Kafka、Kinesis、Elasticsearch、JDBC数据库系统等)内置了连接器。同时它还提供了文件系统的连续型数据源及数据汇,可用来监控目录变化和以时间分区的方式写入文件。</p>
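A toy version of the continuous file source and time-partitioned sink mentioned above, in plain Python (the function names are invented; a real Flink connector keeps watching the directory rather than scanning it once):

```python
import os
import tempfile

def continuous_source(directory, seen):
    """Toy 'continuous file source': yield lines from files not seen before.
    A real connector keeps monitoring; here we do a single scan pass."""
    for name in sorted(os.listdir(directory)):
        if name not in seen:
            seen.add(name)
            with open(os.path.join(directory, name)) as f:
                yield from (line.strip() for line in f)

def partitioned_sink(record, out_dir):
    """Toy 'bucketing sink': route each 'hour,payload' record into a
    partition directory keyed by its time field."""
    hour, payload = record.split(",", 1)
    bucket = os.path.join(out_dir, f"hour={hour}")
    os.makedirs(bucket, exist_ok=True)
    with open(os.path.join(bucket, "part-0"), "a") as f:
        f.write(payload + "\n")

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "events-1.log"), "w") as f:
    f.write("09,login\n09,click\n10,logout\n")

seen = set()
for rec in continuous_source(src, seen):   # pipeline: source -> sink, low latency
    partitioned_sink(rec, dst)

print(sorted(os.listdir(dst)))  # ['hour=09', 'hour=10']
```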
 
-<h3 id="what-are-typical-data-pipeline-applications">What are typical data 
pipeline applications?</h3>
+<h3 id="section-8">典型的数据管道应用实例</h3>
 
 <ul>
-  <li><a 
href="https://data-artisans.com/blog/blink-flink-alibaba-search">Real-time 
search index building</a> in e-commerce</li>
-  <li><a 
href="https://jobs.zalando.com/tech/blog/apache-showdown-flink-vs.-spark/">Continuous
 ETL</a> in e-commerce</li>
+  <li>电子商务中的<a 
href="https://data-artisans.com/blog/blink-flink-alibaba-search">实时查询索引构建</a></li>
+  <li>电子商务中的<a 
href="https://jobs.zalando.com/tech/blog/apache-showdown-flink-vs.-spark/">持续 
ETL</a></li>
 </ul>
 
 
-
   </div>
 </div>
 
diff --git a/usecases.md b/usecases.md
index fc41c7d..141de31 100644
--- a/usecases.md
+++ b/usecases.md
@@ -35,7 +35,7 @@ Instead of querying a remote database, event-driven 
applications access their da
 
 The limits of event-driven applications are defined by how well a stream 
processor can handle time and state. Many of Flink's outstanding features are 
centered around these concepts. Flink provides a rich set of state primitives 
that can manage very large data volumes (up to several terabytes) with 
exactly-once consistency guarantees. Moreover, Flink's support for event-time, 
highly customizable window logic, and fine-grained control of time as provided 
by the `ProcessFunction` enable th [...]
 
-However, Flink's outstanding feature for event-driven applications are 
savepoints. A savepoint a consistent state image that can be used as a starting 
point for compatible applications. Given a savepoint, an application can be 
updated or adapt its scale, or multiple versions of an application can be 
started for A/B testing.
+However, Flink's outstanding feature for event-driven applications is the 
savepoint. A savepoint is a consistent state image that can be used as a 
starting point for compatible applications. Given a savepoint, an application 
can be updated or rescaled, or multiple versions of an application can 
be started for A/B testing.
 
 ### What are typical event-driven applications?
 
diff --git a/usecases.zh.md b/usecases.zh.md
index e1daa8a..9475f31 100644
--- a/usecases.zh.md
+++ b/usecases.zh.md
@@ -4,102 +4,101 @@ title: "应用场景"
 
 <hr />
 
-Apache Flink is an excellent choice to develop and run many different types of 
applications due to its extensive features set. Flink's features include 
support for stream and batch processing, sophisticated state management, 
event-time processing semantics, and exactly-once consistency guarantees for 
state. Moreover, Flink can be deployed on various resource providers such as 
YARN, Apache Mesos, and Kubernetes but also as stand-alone cluster on 
bare-metal hardware. Configured for high av [...]
+Apache Flink 
功能强大,支持开发和运行多种不同种类的应用程序。它的主要特性包括:批流一体化、精密的状态管理、事件时间支持以及精确一次的状态一致性保障等。Flink 
不仅可以运行在包括 YARN、 Mesos、Kubernetes 
在内的多种资源管理框架上,还支持在裸机集群上独立部署。在启用高可用选项的情况下,它不存在单点失效问题。事实证明,Flink 
已经可以扩展到数千核心,其状态可以达到 TB 级别,且仍能保持高吞吐、低延迟的特性。世界各地有很多要求严苛的流处理应用都运行在 Flink 之上。
 
-Below, we explore the most common types of applications that are powered by 
Flink and give pointers to real-world examples.
+接下来我们将介绍 Flink 常见的几类应用并给出相关实例链接。
 
-* <a href="#eventDrivenApps">Event-driven Applications</a>
-* <a href="#analytics">Data Analytics Applications</a>
-* <a href="#pipelines">Data Pipeline Applications</a>
+* <a href="#eventDrivenApps">事件驱动型应用</a>
+* <a href="#analytics">数据分析应用</a>
+* <a href="#pipelines">数据管道应用</a>
   
-## Event-driven Applications <a name="eventDrivenApps"></a>
+## 事件驱动型应用 <a name="eventDrivenApps"></a>
 
-### What are event-driven applications?
+### 什么是事件驱动型应用?
 
-An event-driven application is a stateful application that ingest events from 
one or more event streams and reacts to incoming events by triggering 
computations, state updates, or external actions.
+事件驱动型应用是一类具有状态的应用,它从一个或多个事件流提取数据,并根据到来的事件触发计算、状态更新或其他外部动作。
 
-Event-driven applications are an evolution of the traditional application 
design with separated compute and data storage tiers. In this architecture, 
applications read data from and persist data to a remote transactional database.
+事件驱动型应用是在计算存储分离的传统应用基础上进化而来。在传统架构中,应用需要读写远程事务型数据库。
 
-In contrast, event-driven applications are based on stateful stream processing 
applications. In this design, data and computation are co-located, which yields 
local (in-memory or disk) data access. Fault-tolerance is achieved by 
periodically writing checkpoints to a remote persistent storage. The figure 
below depicts the difference between the traditional application architecture 
and event-driven applications.
+相反,事件驱动型应用是基于状态化流处理来完成。在该设计中,数据和计算不会分离,应用只需访问本地(内存或磁盘)即可获取数据。系统容错性的实现依赖于定期向远程持久化存储写入
 checkpoint。下图描述了传统应用和事件驱动型应用架构的区别。
 
 <br>
 <div class="row front-graphic">
   <img src="{{ site.baseurl }}/img/usecases-eventdrivenapps.png" width="700px" 
/>
 </div>
 
-### What are the advantages of event-driven applications?
+### 事件驱动型应用的优势?
 
-Instead of querying a remote database, event-driven applications access their 
data locally which yields better performance, both in terms of throughput and 
latency. The periodic checkpoints to a remote persistent storage can be 
asynchronously and incrementally done. Hence, the impact of checkpointing on 
the regular event processing is very small. However, the event-driven 
application design provides more benefits than just local data access. In the 
tiered architecture, it is common that  [...]
+事件驱动型应用无须查询远程数据库,本地数据访问使得它具有更高的吞吐和更低的延迟。而由于定期向远程持久化存储的 checkpoint 
工作可以异步、增量式完成,因此对于正常事件处理的影响甚微。事件驱动型应用的优势不仅限于本地数据访问。传统分层架构下,通常多个应用会共享同一个数据库,因而任何对数据库自身的更改(例如:由应用更新或服务扩容导致数据布局发生改变)都需要谨慎协调。反观事件驱动型应用,由于只需考虑自身数据,因此在更改数据表示或服务扩容时所需的协调工作将大大减少。
 
-### How does Flink support event-driven applications?
+### Flink 如何支持事件驱动型应用?
 
-The limits of event-driven applications are defined by how well a stream 
processor can handle time and state. Many of Flink's outstanding features are 
centered around these concepts. Flink provides a rich set of state primitives 
that can manage very large data volumes (up to several terabytes) with 
exactly-once consistency guarantees. Moreover, Flink's support for event-time, 
highly customizable window logic, and fine-grained control of time as provided 
by the `ProcessFunction` enable th [...]
+事件驱动型应用会受制于底层流处理系统对时间和状态的把控能力,Flink 
诸多优秀特质都是围绕这些方面来设计的。它提供了一系列丰富的状态操作原语,允许以精确一次的一致性语义合并海量规模(TB 级别)的状态数据。此外,Flink 
还支持事件时间和自由度极高的定制化窗口逻辑,而且它内置的 `ProcessFunction` 支持细粒度时间控制,方便实现一些高级业务逻辑。同时,Flink 
还拥有一个复杂事件处理(CEP)类库,可以用来检测数据流中的模式。
 
-However, Flink's outstanding feature for event-driven applications are 
savepoints. A savepoint a consistent state image that can be used as a starting 
point for compatible applications. Given a savepoint, an application can be 
updated or adapt its scale, or multiple versions of an application can be 
started for A/B testing.
+Flink 中针对事件驱动应用的明星特性当属 savepoint。Savepoint 是一个一致性的状态映像,它可以用来初始化任意状态兼容的应用。在完成一次 
savepoint 后,即可放心对应用升级或扩容,还可以启动多个版本的应用来完成 A/B 测试。
 
-### What are typical event-driven applications?
+### 典型的事件驱动型应用实例
 
-* <a 
href="https://sf-2017.flink-forward.org/kb_sessions/streaming-models-how-ing-adds-models-at-runtime-to-catch-fraudsters/">Fraud
 detection</a>
-* <a 
href="https://sf-2017.flink-forward.org/kb_sessions/building-a-real-time-anomaly-detection-system-with-flink-mux/">Anomaly
 detection</a>
-* <a 
href="https://sf-2017.flink-forward.org/kb_sessions/dynamically-configured-stream-processing-using-flink-kafka/">Rule-based
 alerting</a>
-* <a 
href="https://jobs.zalando.com/tech/blog/complex-event-generation-for-business-process-monitoring-using-apache-flink/">Business
 process monitoring</a>
-* <a 
href="https://berlin-2017.flink-forward.org/kb_sessions/drivetribes-kappa-architecture-with-apache-flink/">Web
 application (social network)</a>
+* <a 
href="https://sf-2017.flink-forward.org/kb_sessions/streaming-models-how-ing-adds-models-at-runtime-to-catch-fraudsters/">反欺诈</a>
+* <a 
href="https://sf-2017.flink-forward.org/kb_sessions/building-a-real-time-anomaly-detection-system-with-flink-mux/">异常检测</a>
+* <a 
href="https://sf-2017.flink-forward.org/kb_sessions/dynamically-configured-stream-processing-using-flink-kafka/">基于规则的报警</a>
+* <a 
href="https://jobs.zalando.com/tech/blog/complex-event-generation-for-business-process-monitoring-using-apache-flink/">业务流程监控</a>
+* <a 
href="https://berlin-2017.flink-forward.org/kb_sessions/drivetribes-kappa-architecture-with-apache-flink/">(社交网络)Web
 应用</a>
 
-## Data Analytics Applications<a name="analytics"></a>
+## 数据分析应用<a name="analytics"></a>
 
-### What are data analytics applications?
+### 什么是数据分析应用?
 
-Analytical jobs extract information and insight from raw data. Traditionally, 
analytics are performed as batch queries or applications on bounded data sets 
of recorded events. In order to incorporate the latest data into the result of 
the analysis, it has to be added to the analyzed data set and the query or 
application is rerun. The results are written to a storage system or emitted as 
reports.
+数据分析任务需要从原始数据中提取有价值的信息和指标。传统的分析方式通常是利用批查询,或将事件记录下来并基于此有限数据集构建应用来完成。为了得到最新数据的分析结果,必须先将它们加入分析数据集并重新执行查询或运行应用,随后将结果写入存储系统或生成报告。
 
-With a sophisticated stream processing engine, analytics can also be performed 
in a real-time fashion. Instead of reading finite data sets, streaming queries 
or applications ingest real-time event streams and continuously produce and 
update results as events are consumed. The results are either written to an 
external database or maintained as internal state. Dashboard application can 
read the latest results from the external database or directly query the 
internal state of the application.
+借助一些先进的流处理引擎,还可以实时地进行数据分析。和传统模式下读取有限数据集不同,流式查询或应用会接入实时事件流,并随着事件消费持续产生和更新结果。这些结果数据可能会写入外部数据库系统或以内部状态的形式维护。仪表展示应用可以相应地从外部数据库读取数据或直接查询应用的内部状态。
 
-Apache Flink supports streaming as well as batch analytical applications as 
shown in the figure below.
+如下图所示,Apache Flink 同时支持流式及批量分析应用。
 
 <div class="row front-graphic">
   <img src="{{ site.baseurl }}/img/usecases-analytics.png" width="700px" />
 </div>
 
-### What are the advantages of streaming analytics applications?
+### 流式分析应用的优势?
 
-The advantages of continuous streaming analytics compared to batch analytics 
are not limited to a much lower latency from events to insight due to 
elimination of periodic import and query execution. In contrast to batch 
queries, streaming queries do not have to deal with artificial boundaries in 
the input data which are caused by periodic imports and the bounded nature of 
the input. 
+和批量分析相比,由于流式分析省掉了周期性的数据导入和查询过程,因此从事件中获取指标的延迟更低。不仅如此,批量查询必须处理那些由定期导入和输入有界性导致的人工数据边界,而流式查询则无须考虑该问题。
 
-Another aspect is a simpler application architecture. A batch analytics 
pipeline consist of several independent components to periodically schedule 
data ingestion and query execution. Reliably operating such a pipeline is 
non-trivial because failures of one component affect the following steps of the 
pipeline. In contrast, a streaming analytics application which runs on a 
sophisticated stream processor like Flink incorporates all steps from data 
ingestions to continuous result computatio [...]
+另一方面,流式分析会简化应用抽象。批量查询的流水线通常由多个独立部件组成,需要周期性地调度提取数据和执行查询。如此复杂的流水线操作起来并不容易,一旦某个组件出错将会影响流水线的后续步骤。而流式分析应用整体运行在
 Flink 之类的高端流处理系统之上,涵盖了从数据接入到连续结果计算的所有步骤,因此可以依赖底层引擎提供的故障恢复机制。
 
-### How does Flink support data analytics applications?
+### Flink 如何支持数据分析类应用?
 
-Flink provides very good support for continuous streaming as well as batch 
analytics. Specifically, it features an ANSI-compliant SQL interface with 
unified semantics for batch and streaming queries. SQL queries compute the same 
result regardless whether they are run on a static data set of recorded events 
or on a real-time event stream. Rich support for user-defined functions ensures 
that custom code can be executed in SQL queries. If even more custom logic is 
required, Flink's DataStre [...]
+Flink 为持续流式分析和批量分析都提供了良好的支持。具体而言,它内置了一个符合 ANSI 标准的 SQL 
接口,将批、流查询的语义统一起来。无论是在记录事件的静态数据集上还是实时事件流上,相同 SQL 查询都会得到一致的结果。同时 Flink 
还支持丰富的用户自定义函数,允许在 SQL 中执行定制化代码。如果还需进一步定制逻辑,可以利用 Flink DataStream API 和 DataSet 
API 进行更低层次的控制。此外,Flink 的 Gelly 库为基于批量数据集的大规模高性能图分析提供了算法和构建模块支持。
 
-### What are typical data analytics applications?
+### 典型的数据分析应用实例
 
-* <a 
href="http://2016.flink-forward.org/kb_sessions/a-brief-history-of-time-with-apache-flink-real-time-monitoring-and-analysis-with-flink-kafka-hb/">Quality
 monitoring of Telco networks</a>
-* <a 
href="https://techblog.king.com/rbea-scalable-real-time-analytics-king/">Analysis
 of product updates &amp; experiment evaluation</a> in mobile applications
-* <a href="https://eng.uber.com/athenax/">Ad-hoc analysis of live data</a> in 
consumer technology
-* Large-scale graph analysis
+* <a 
href="http://2016.flink-forward.org/kb_sessions/a-brief-history-of-time-with-apache-flink-real-time-monitoring-and-analysis-with-flink-kafka-hb/">电信网络质量监控</a>
+* 移动应用中的<a 
href="https://techblog.king.com/rbea-scalable-real-time-analytics-king/">产品更新及实验评估分析</a>
+* 消费者技术中的<a href="https://eng.uber.com/athenax/">实时数据即席分析</a>
+* 大规模图分析
 
-## Data Pipeline Applications <a name="pipelines"></a>
+## 数据管道应用 <a name="pipelines"></a>
 
-### What are data pipelines?
+### 什么是数据管道?
 
-Extract-transform-load (ETL) is a common approach to convert and move data 
between storage systems. Often ETL jobs are periodically triggered to copy data 
from from transactional database systems to an analytical database or a data 
warehouse. 
+提取-转换-加载(ETL)是一种在存储系统之间进行数据转换和迁移的常用方法。ETL 作业通常会周期性地触发,将数据从事务型数据库拷贝到分析型数据库或数据仓库。
 
-Data pipelines serve a similar purpose as ETL jobs. They transform and enrich 
data and can move it from one storage system to another. However, they operate 
in a continuous streaming mode instead of being periodically triggered. Hence, 
they are able to read records from sources that continuously produce data and 
move it with low latency to their destination. For example a data pipeline 
might monitor a file system directory for new files and write their data into 
an event log. Another app [...]
+数据管道和 ETL 
作业的用途相似,都可以转换、丰富数据,并将其从某个存储系统移动到另一个。但数据管道是以持续流模式运行,而非周期性触发。因此它支持从一个不断生成数据的源头读取记录,并将它们以低延迟移动到终点。例如:数据管道可以用来监控文件系统目录中的新文件,并将其数据写入事件日志;另一个应用可能会将事件流物化到数据库或增量构建和优化查询索引。
 
-The figure below depicts the difference between periodic ETL jobs and 
continuous data pipelines.
+下图描述了周期性 ETL 作业和持续数据管道的差异。
 
 <div class="row front-graphic">
   <img src="{{ site.baseurl }}/img/usecases-datapipelines.png" width="700px" />
 </div>
 
-### What are the advantages of data pipelines?
+### 数据管道的优势?
 
-The obvious advantage of continuous data pipelines over periodic ETL jobs is 
the reduced latency of moving data to its destination. Moreover, data pipelines 
are more versatile and can be employed for more use cases because they are able 
to continuously consume and emit data. 
+和周期性 ETL 作业相比,持续数据管道可以明显降低将数据移动到目的端的延迟。此外,由于它能够持续消费和发送数据,因此用途更广,支持用例更多。
 
-### How does Flink support data pipelines?
+### Flink 如何支持数据管道应用?
 
-Many common data transformation or enrichment tasks can be addressed by 
Flink's SQL interface (or Table API) and its support for user-defined 
functions. Data pipelines with more advanced requirements can be realized by 
using the DataStream API which is more generic. Flink provides a rich set of 
connectors to various storage systems such as Kafka, Kinesis, Elasticsearch, 
and JDBC database systems. It also features continuous sources for file systems 
that monitor directories and sinks that [...]
+很多常见的数据转换和增强操作可以利用 Flink 的 SQL 接口(或 Table 
API)及用户自定义函数解决。如果数据管道有更高级的需求,可以选择更通用的 DataStream API 来实现。Flink 
为多种数据存储系统(如:Kafka、Kinesis、Elasticsearch、JDBC数据库系统等)内置了连接器。同时它还提供了文件系统的连续型数据源及数据汇,可用来监控目录变化和以时间分区的方式写入文件。
 
-### What are typical data pipeline applications?
-
-* <a 
href="https://data-artisans.com/blog/blink-flink-alibaba-search">Real-time 
search index building</a> in e-commerce
-* <a 
href="https://jobs.zalando.com/tech/blog/apache-showdown-flink-vs.-spark/">Continuous
 ETL</a> in e-commerce
+### 典型的数据管道应用实例

+* 电子商务中的<a 
href="https://data-artisans.com/blog/blink-flink-alibaba-search">实时查询索引构建</a>
+* 电子商务中的<a 
href="https://jobs.zalando.com/tech/blog/apache-showdown-flink-vs.-spark/">持续 
ETL</a>
