michaelli916 commented on a change in pull request #13459:
URL: https://github.com/apache/flink/pull/13459#discussion_r619767718



##########
File path: docs/content.zh/docs/connectors/table/filesystem.md
##########
@@ -149,15 +145,14 @@ a timeout that specifies the maximum duration for which a file can be open.
   </tbody>
 </table>
 
-**NOTE:** For bulk formats (parquet, orc, avro), the rolling policy in combination with the checkpoint interval(pending files
-become finished on the next checkpoint) control the size and number of these parts.
+**注意:** 对于 bulk 格式 (parquet, orc, avro), 滚动策略和检查点间隔控制了分区文件的大小和个数 (未完成的文件会在下个检查点完成).
 
-**NOTE:** For row formats (csv, json), you can set the parameter `sink.rolling-policy.file-size` or `sink.rolling-policy.rollover-interval` in the connector properties and parameter `execution.checkpointing.interval` in flink-conf.yaml together
-if you don't want to wait a long period before observe the data exists in file system. For other formats (avro, orc), you can just set parameter `execution.checkpointing.interval` in flink-conf.yaml.
+**注意:** 对于行格式 (csv, json), 如果想使得分区文件更快地在文件系统中可见,可以设置连接器参数 `sink.rolling-policy.file-size` 或 `sink.rolling-policy.rollover-interval`,以及 flink-conf.yaml 中的 `execution.checkpointing.interval`。
+对于其他格式 (avro, orc), 可以只设置 flink-conf.yaml 中的 `execution.checkpointing.interval`。
 
-### File Compaction
+### 文件压缩

Review comment:
       Agreed, "文件压缩" (literally "file compression") can easily be misread, so "文件合并" ("file merging") is indeed the better rendering for "File Compaction".
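
For context, the rolling-policy note in the diff above refers to connector options like the following. This is a minimal sketch, not part of the PR: the table name, schema, and path are hypothetical, while the `WITH` keys are the filesystem connector options named in the note and the values shown are their documented defaults.

```sql
-- Hypothetical CSV (row-format) filesystem sink; table name, schema, and
-- path are illustrative only.
CREATE TABLE fs_sink (
  user_id STRING,
  amount  DOUBLE,
  dt      STRING
) PARTITIONED BY (dt) WITH (
  'connector' = 'filesystem',
  'path'      = 'file:///tmp/fs_sink',               -- hypothetical path
  'format'    = 'csv',
  'sink.rolling-policy.file-size' = '128MB',         -- roll a part file once it reaches this size
  'sink.rolling-policy.rollover-interval' = '30min'  -- roll a part file after it has been open this long
);

-- In flink-conf.yaml, so that pending files become finished at each checkpoint:
-- execution.checkpointing.interval: 1min
```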



