fsk119 commented on a change in pull request #14437:
URL: https://github.com/apache/flink/pull/14437#discussion_r549073093



##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -22,54 +22,54 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink SQL makes it simple to develop streaming applications using standard SQL. It is easy to learn Flink if you have ever worked with a database or SQL like system by remaining ANSI-SQL 2011 compliant. This tutorial will help you get started quickly with a Flink SQL development environment. 
+Flink SQL 使得使用标准 SQL 开发流应用程序变的简单。如果你曾经在工作中使用过兼容 ANSI-SQL 2011 的数据库或类似的 SQL 系统,那么就很容易学习 Flink。本教程将帮助你快速入门 Flink SQL 开发环境。
  
 * This will be replaced by the TOC
 {:toc}
 
 
-### Prerequisetes 
+### 先决条件
 
-You only need to have basic knowledge of SQL to follow along. No other programming experience is assumed. 
+你只需要具备 SQL 的基础知识即可,不需要其他编程经验。
 
-### Installation
+### 安装
 
-There are multiple ways to install Flink. For experimentation, the most common option is to download the binaries and run them locally. You can follow the steps in [local installation]({%link try-flink/local_installation.zh.md %}) to set up an environment for the rest of the tutorial. 
+安装 Flink 有多种方式。为了实验,最常见的选择是下载二进制包并在本地运行。你可以按照[本地模式安装]({% link try-flink/local_installation.zh.md %})中的步骤为本教程的剩余部分设置环境。
 
-Once you're all set, use the following command to start a local cluster from the installation folder:
+完成所有设置后,在安装文件夹中使用以下命令启动本地集群:
 
 {% highlight bash %}
 ./bin/start-cluster.sh
 {% endhighlight %}
  
-Once started, the Flink WebUI on [localhost:8081](localhost:8081) is available locally, from which you can monitor the different jobs.
+启动完成后,就可以在本地访问 Flink WebUI [localhost:8081](localhost:8081),你可以通过它来监视不同的作业。

Review comment:
       This doesn't read quite smoothly? Or start a new sentence: 通过它,你可以监控不同的作业。

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -129,11 +128,11 @@ FROM employee_information
 GROUP BY dep_id;
  {% endhighlight %} 
 
-Such queries are considered _stateful_. Flink's advanced fault-tolerance mechanism will maintain internal state and consistency, so queries always return the correct result, even in the face of hardware failure. 
+这样的查询被认为是 _有状态的_。Flink 的高级容错机制将维持内部状态和一致性,因此即使遇到硬件故障,查询也始终返回正确结果。
 
-## Sink Tables
+## Sink 表
 
-When running this query, the SQL client provides output in real-time but in a read-only fashion. Storing results - to power a report or dashboard - requires writing out to another table. This can be achieved using an `INSERT INTO` statement. The table referenced in this clause is known as a sink table. An `INSERT INTO` statement will be submitted as a detached query to the Flink cluster. 
+当运行此查询时,SQL 客户端实时但是以只读方式提供输出。存储结果(为报表或仪表板提供数据来源)需要写到另一个表。这可以使用 `INSERT INTO` 语句来实现。本节中引用的表称为 sink 表。`INSERT INTO` 语句将作为一个独立查询被提交到 Flink 集群中。

Review comment:
       "存储结果(为报表或仪表板提供数据来源)需要写到另一个表" -> "存储结果,作为报表或仪表板提供数据来源,需要写到另一个表" 

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -22,54 +22,54 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink SQL makes it simple to develop streaming applications using standard SQL. It is easy to learn Flink if you have ever worked with a database or SQL like system by remaining ANSI-SQL 2011 compliant. This tutorial will help you get started quickly with a Flink SQL development environment. 
+Flink SQL 使得使用标准 SQL 开发流应用程序变的简单。如果你曾经在工作中使用过兼容 ANSI-SQL 2011 的数据库或类似的 SQL 系统,那么就很容易学习 Flink。本教程将帮助你快速入门 Flink SQL 开发环境。

Review comment:
       get started quickly with a Flink SQL development environment -> 在 Flink SQL 开发环境下快速入门

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -79,16 +79,15 @@ SELECT CURRENT_TIMESTAMP;
 
 {% top %}
 
-## Source Tables
+## Source 表
 
-As with all SQL engines, Flink queries operate on top of tables. 
-It differs from a traditional database because Flink does not manage data at rest locally; instead, its queries operate continuously over external tables. 
+与所有 SQL 引擎一样,Flink 查询在表上进行操作。与传统数据库不同,因为 Flink 不在本地管理静态数据;相反,它的查询在外部表上连续运行。
 
-Flink data processing pipelines begin with source tables. Source tables produce rows operated over during the query's execution; they are the tables referenced in the `FROM` clause of a query.  These could be Kafka topics, databases, filesystems, or any other system that Flink knows how to consume. 
+Flink 数据处理管道开始于 source 表。在查询执行期间,source 表产生操作的行;它们是查询时 `FROM` 子句中引用的表。这些表可能是 Kafka 的 topics,数据库,文件系统,或 Flink 知道如何消费的任何其他系统。

Review comment:
       Here, "pipeline" should be translated as 流水线.
   
   "Source tables produce rows operated over during the query's execution" -> Source 表产生在查询执行期间可以被操作的行。
   
   "or any other system that Flink knows how to consume" -> 或者 任何其它 Flink 知道如何消费的系统

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -113,13 +112,13 @@ SELECT * from employee_information WHERE DeptId = 1;
 
 {% top %}
 
-## Continuous Queries
+## 连续查询
 
-While not designed initially with streaming semantics in mind, SQL is a powerful tool for building continuous data pipelines. Where Flink SQL differs from traditional database queries is that is continuously consuming rows as the arrives and produces updates to its results. 
+虽然最初设计时没有考虑流语义,但 SQL 是用于构建连续数据管道的强大工具。Flink SQL 持续消费到达的行并对其结果进行更新。
 
-A [continuous query]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries) never terminates and produces a dynamic table as a result. [Dynamic tables]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries) are the core concept of Flink's Table API and SQL support for streaming data. 
+一个[连续查询]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries)永远不会终止,结果会生成一个动态表。[动态表]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries)是 Flink 中 Table API 和 SQL 对流数据支持的核心概念。
 
-Aggregations on continuous streams need to store aggregated results continuously during the execution of the query. For example, suppose you need to count the number of employees for each department from an incoming data stream. The query needs to maintain the most up to date count for each department to output timely results as new rows are processed.
+连续流上的聚合需要在查询执行期间连续存储聚合结果。例如,假设你需要从传入的数据流中计算每个部门的员工人数。查询需要保持每个部门最新的计算总数,以便在处理新行时及时输出结果。

Review comment:
       “连续存储聚合结果” -> “不断地存储聚合的结果”? Would that read more smoothly?
   
   “maintain” -> "维护"?
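   
   For context, the continuous aggregation this hunk describes is the department head-count query quoted in the earlier hunk; spelled out in full it would look roughly like this (the `emp_count` alias is illustrative):
   
   ```sql
   -- A continuous aggregation: Flink keeps a per-department count as internal
   -- state and emits an updated result each time a new employee row arrives.
   SELECT dep_id, COUNT(*) AS emp_count
   FROM employee_information
   GROUP BY dep_id;
   ```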

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -113,13 +112,13 @@ SELECT * from employee_information WHERE DeptId = 1;
 
 {% top %}
 
-## Continuous Queries
+## 连续查询
 
-While not designed initially with streaming semantics in mind, SQL is a powerful tool for building continuous data pipelines. Where Flink SQL differs from traditional database queries is that is continuously consuming rows as the arrives and produces updates to its results. 
+虽然最初设计时没有考虑流语义,但 SQL 是用于构建连续数据管道的强大工具。Flink SQL 持续消费到达的行并对其结果进行更新。
 
-A [continuous query]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries) never terminates and produces a dynamic table as a result. [Dynamic tables]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries) are the core concept of Flink's Table API and SQL support for streaming data. 
+一个[连续查询]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries)永远不会终止,结果会生成一个动态表。[动态表]({% link dev/table/streaming/dynamic_tables.zh.md %}#continuous-queries)是 Flink 中 Table API 和 SQL 对流数据支持的核心概念。

Review comment:
       “结果会生成一个动态表” -> “并会产生一个动态表作为结果”

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -79,16 +79,15 @@ SELECT CURRENT_TIMESTAMP;
 
 {% top %}
 
-## Source Tables
+## Source 表
 
-As with all SQL engines, Flink queries operate on top of tables. 
-It differs from a traditional database because Flink does not manage data at rest locally; instead, its queries operate continuously over external tables. 
+与所有 SQL 引擎一样,Flink 查询在表上进行操作。与传统数据库不同,因为 Flink 不在本地管理静态数据;相反,它的查询在外部表上连续运行。
 
-Flink data processing pipelines begin with source tables. Source tables produce rows operated over during the query's execution; they are the tables referenced in the `FROM` clause of a query.  These could be Kafka topics, databases, filesystems, or any other system that Flink knows how to consume. 
+Flink 数据处理管道开始于 source 表。在查询执行期间,source 表产生操作的行;它们是查询时 `FROM` 子句中引用的表。这些表可能是 Kafka 的 topics,数据库,文件系统,或 Flink 知道如何消费的任何其他系统。
 
-Tables can be defined through the SQL client or using environment config file. The SQL client support [SQL DDL commands]({% link dev/table/sql/index.zh.md %}) similar to traditional SQL. Standard SQL DDL is used to [create]({% link dev/table/sql/create.zh.md %}), [alter]({% link dev/table/sql/alter.zh.md %}), [drop]({% link dev/table/sql/drop.zh.md %}) tables. 
+可以通过 SQL 客户端或使用环境配置文件来定义表。SQL 客户端支持类似于传统 SQL 的 [SQL DDL 命令]({% link dev/table/sql/index.zh.md %})。标准 SQL DDL 用于[创建]({% link dev/table/sql/create.zh.md %}),[修改]({% link dev/table/sql/alter.zh.md %}),[删除]({% link dev/table/sql/drop.zh.md %})表。
 
-Flink has a support for different [connectors]({% link dev/table/connect.zh.md %}) and [formats]({%link dev/table/connectors/formats/index.zh.md %}) that can be used with tables. Following is an example to define a source table backed by a [CSV file]({%link dev/table/connectors/formats/csv.zh.md %}) with `emp_id`, `name`, `dept_id` as columns in a `CREATE` table statement.
+Flink 支持可以与表一起使用的不同[连接器]({% link dev/table/connect.zh.md %})和[格式]({% link dev/table/connectors/formats/index.zh.md %})。下面是一个示例,定义一个[CSV 文件]({% link dev/table/connectors/formats/csv.zh.md %})作为 source 表,其中 `emp_id`,`name`,`dept_id` 作为 `CREATE` 表语句中的列。

Review comment:
       “Flink has a support for different [connectors] and [formats] that can be used with tables.” -> "Flink 支持 不同的 连接器和格式相结合以定义表。" I'm not sure whether this translation is accurate?
   
   "a source table backed by a CSV file" -> "以 CSV 文件作为存储格式的 source 表"

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -79,16 +79,15 @@ SELECT CURRENT_TIMESTAMP;
 
 {% top %}
 
-## Source Tables
+## Source 表
 
-As with all SQL engines, Flink queries operate on top of tables. 
-It differs from a traditional database because Flink does not manage data at rest locally; instead, its queries operate continuously over external tables. 
+与所有 SQL 引擎一样,Flink 查询在表上进行操作。与传统数据库不同,因为 Flink 不在本地管理静态数据;相反,它的查询在外部表上连续运行。

Review comment:
       “Flink 查询在表上进行操作” -> “Flink 查询操作是在表上进行”
   
   “与传统数据库不同,因为 Flink 不在本地管理静态数据” doesn't read quite smoothly? Suggest dropping “因为”.

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -113,13 +112,13 @@ SELECT * from employee_information WHERE DeptId = 1;
 
 {% top %}
 
-## Continuous Queries
+## 连续查询
 
-While not designed initially with streaming semantics in mind, SQL is a powerful tool for building continuous data pipelines. Where Flink SQL differs from traditional database queries is that is continuously consuming rows as the arrives and produces updates to its results. 
+虽然最初设计时没有考虑流语义,但 SQL 是用于构建连续数据管道的强大工具。Flink SQL 持续消费到达的行并对其结果进行更新。

Review comment:
       "pipline" -> "流水线"

##########
File path: docs/dev/table/sql/gettingStarted.zh.md
##########
@@ -22,54 +22,54 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink SQL makes it simple to develop streaming applications using standard SQL. It is easy to learn Flink if you have ever worked with a database or SQL like system by remaining ANSI-SQL 2011 compliant. This tutorial will help you get started quickly with a Flink SQL development environment. 
+Flink SQL 使得使用标准 SQL 开发流应用程序变的简单。如果你曾经在工作中使用过兼容 ANSI-SQL 2011 的数据库或类似的 SQL 系统,那么就很容易学习 Flink。本教程将帮助你快速入门 Flink SQL 开发环境。
  
 * This will be replaced by the TOC
 {:toc}
 
 
-### Prerequisetes 
+### 先决条件
 
-You only need to have basic knowledge of SQL to follow along. No other programming experience is assumed. 
+你只需要具备 SQL 的基础知识即可,不需要其他编程经验。
 
-### Installation
+### 安装
 
-There are multiple ways to install Flink. For experimentation, the most common option is to download the binaries and run them locally. You can follow the steps in [local installation]({%link try-flink/local_installation.zh.md %}) to set up an environment for the rest of the tutorial. 
+安装 Flink 有多种方式。为了实验,最常见的选择是下载二进制包并在本地运行。你可以按照[本地模式安装]({% link try-flink/local_installation.zh.md %})中的步骤为本教程的剩余部分设置环境。

Review comment:
       For experimentation -> 对于实验而言



