Mrzyxing commented on code in PR #20510:
URL: https://github.com/apache/flink/pull/20510#discussion_r948565208


##########
docs/content.zh/docs/try-flink/table_api.md:
##########
@@ -275,38 +270,38 @@ public static Table report(Table transactions) {
 }
 ```
 
-This defines your application as using one hour tumbling windows based on the timestamp column.
-So a row with timestamp `2019-06-01 01:23:47` is put in the `2019-06-01 01:00:00` window.
+The code above means that the application uses tumbling windows partitioned on the given timestamp column, with one-hour intervals.
+So a row with timestamp `2019-06-01 01:23:47` is put in the `2019-06-01 01:00:00` window.
 
+Unlike other attributes, time always moves forward in a continuous streaming application, so time-based aggregations are unique.
 
-Aggregations based on time are unique because time, as opposed to other attributes, generally moves forward in a continuous streaming application.
-Unlike `floor` and your UDF, window functions are [intrinsics](https://en.wikipedia.org/wiki/Intrinsic_function), which allows the runtime to apply additional optimizations.
-In a batch context, windows offer a convenient API for grouping records by a timestamp attribute.
+Unlike `floor` and your UDF, window functions are [intrinsics](https://en.wikipedia.org/wiki/Intrinsic_function), which allows the runtime to apply additional optimizations.
+In a batch context, window functions also offer a convenient API if you need to group records by a timestamp attribute.
 
-Running the test with this implementation will also pass.
+With this implementation, the test will also pass.
 
-## Once More, With Streaming!
+## Once More, With Streaming!
 
-And that's it, a fully functional, stateful, distributed streaming application!
-The query continuously consumes the stream of transactions from Kafka, computes the hourly spendings, and emits results as soon as they are ready.
-Since the input is unbounded, the query keeps running until it is manually stopped.
-And because the Job uses time window-based aggregations, Flink can perform specific optimizations such as state clean up when the framework knows that no more records will arrive for a particular window.
+This time, the application you have written is a fully functional, stateful, distributed streaming application!
+The query continuously consumes the stream of transactions from Kafka, computes the hourly spendings, and emits results as soon as each window closes.

Review Comment:
   Yes, I do think so, it is just missing the word '流'.
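   
   For context, the sentence under discussion describes the tumbling-window version of `report` from this walkthrough. A minimal sketch of that method (assuming the walkthrough's `transaction_time` and `amount` columns and the standard Table API expression imports) looks like:
   
   ```java
   import static org.apache.flink.table.api.Expressions.$;
   import static org.apache.flink.table.api.Expressions.lit;
   
   import org.apache.flink.table.api.Table;
   import org.apache.flink.table.api.Tumble;
   
   public static Table report(Table transactions) {
       return transactions
           // one-hour tumbling windows keyed on the event-time column
           .window(Tumble.over(lit(1).hours()).on($("transaction_time")).as("log_ts"))
           .groupBy($("account_id"), $("log_ts"))
           .select(
               $("account_id"),
               // emit the window start as the row's timestamp
               $("log_ts").start().as("log_ts"),
               $("amount").sum().as("amount"));
   }
   ```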


