RocMarshal commented on a change in pull request #18718:
URL: https://github.com/apache/flink/pull/18718#discussion_r805170553



##########
File path: docs/content.zh/docs/connectors/datastream/filesystem.md
##########
@@ -28,39 +28,34 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# FileSystem
+# 文件系统
 
-This connector provides a unified Source and Sink for `BATCH` and `STREAMING` that reads or writes (partitioned) files to file systems
-supported by the [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}). This filesystem
-connector provides the same guarantees for both `BATCH` and `STREAMING` and is designed to provide exactly-once semantics for `STREAMING` execution.
+连接器提供了统一的 Source 和 Sink 在 `BATCH` 和 `STREAMING` 两种模式下,连接文件系统对文件进行读或写(包含分区文件)
+由 [Flink `FileSystem` abstraction]({{< ref "docs/deployment/filesystems/overview" >}}) 提供支持。文件系统连接器同时为 `BATCH` 和 `STREAMING` 模式提供了相同的保证,并且被设计的执行过程为 `STREAMING` 模式提供了精确一次(exactly-once)语义。
 
-The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS)
-with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (e.g., Avro, CSV, Parquet),
-and produces a stream or records.
+连接器支持从任何文件系统(包括分布式的,例如,POSIX、 S3、 HDFS)通过某种数据格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) (例如,Avro、 CSV、 Parquet) 生成一个流或者多个记录,然后对文件进行读取或写入。
 
-## File Source
+## 文件数据源
 
-The `File Source` is based on the [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api),
-a unified data source that reads files - both in batch and in streaming mode.
-It is divided into the following two parts: `SplitEnumerator` and `SourceReader`.
+ `File Source` 是基于 [Source API]({{< ref "docs/dev/datastream/sources" >}}#the-data-source-api) 的,一种读取文件的统一数据源 - 同时支持批和流两种模式。
+可以分为以下两个部分:`SplitEnumerator` 和 `SourceReader`。
 
-* `SplitEnumerator` is responsible for discovering and identifying the files to read and assigns them to the `SourceReader`.
-* `SourceReader` requests the files it needs to process and reads the file from the filesystem.
+* `SplitEnumerator` 负责发现和识别要读取的文件,并且指派这些文件给 `SourceReader`。
+* `SourceReader` 请求需要处理的文件,并从文件系统中读取该文件。
 
-You will need to combine the File Source with a [format]({{< ref "docs/connectors/datastream/formats/overview" >}}), which allows you to
-parse CSV, decode AVRO, or read Parquet columnar files.
+你可能需要使用某个格式 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 合并文件源,允许你读取 CSV、 AVRO、 Parquet 数据格式文件。

Review comment:
       nit:
   ```suggestion
   你可能需要指定某种 [format]({{< ref "docs/connectors/datastream/formats/overview" >}}) 与 `File Source` 联合进行解析 CSV、解码AVRO、或者读取 Parquet 列式文件。
   ```
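
For readers following this review, the `SplitEnumerator` / `SourceReader` division the quoted doc describes can be illustrated with a toy sketch. This is plain Python, not Flink's actual API; the class and method names below are illustrative only, chosen to mirror the two roles: one component discovers files and hands them out, the other pulls them and reads records.

```python
# Toy illustration (NOT Flink's real API) of the split described in the doc:
# an enumerator discovers the files to read and assigns them one at a time,
# while a reader requests splits and reads each file's contents as records.
import os
import tempfile
from collections import deque


class SplitEnumerator:
    """Discovers files under a directory and hands them out as 'splits'."""

    def __init__(self, directory):
        self.pending = deque(
            os.path.join(directory, name) for name in sorted(os.listdir(directory))
        )

    def next_split(self):
        # Return the next file path to read, or None when all are assigned.
        return self.pending.popleft() if self.pending else None


class SourceReader:
    """Requests splits from the enumerator and reads each file's lines."""

    def __init__(self, enumerator):
        self.enumerator = enumerator

    def read_all(self):
        records = []
        while (path := self.enumerator.next_split()) is not None:
            with open(path, encoding="utf-8") as f:
                records.extend(line.rstrip("\n") for line in f)
        return records


# Minimal demo: two files become one stream of line records.
with tempfile.TemporaryDirectory() as d:
    for name, text in [("a.txt", "1\n2\n"), ("b.txt", "3\n")]:
        with open(os.path.join(d, name), "w", encoding="utf-8") as f:
            f.write(text)
    records = SourceReader(SplitEnumerator(d)).read_all()
    print(records)  # ['1', '2', '3']
```

In the real connector, the format (CSV, Avro, Parquet) would replace the line-splitting step above, which is exactly the sentence the `suggestion` block rewords.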




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
