This is an automated email from the ASF dual-hosted git repository.
lzljs3620320 pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git
The following commit(s) were added to refs/heads/release-1.12 by this push:
new 3c432d8 [FLINK-20327][doc] The Hive's read/write page should redirect to SQL Fileystem connector
3c432d8 is described below
commit 3c432d82ee4fa5607713d0395866c00358c13cd2
Author: Leonard Xu <[email protected]>
AuthorDate: Fri Nov 27 10:45:26 2020 +0800
[FLINK-20327][doc] The Hive's read/write page should redirect to SQL Fileystem connector
This closes #14231
---
docs/dev/table/connectors/hive/hive_read_write.md | 3 +--
docs/dev/table/connectors/hive/hive_read_write.zh.md | 3 +--
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/docs/dev/table/connectors/hive/hive_read_write.md b/docs/dev/table/connectors/hive/hive_read_write.md
index a779456..435ab0b 100644
--- a/docs/dev/table/connectors/hive/hive_read_write.md
+++ b/docs/dev/table/connectors/hive/hive_read_write.md
@@ -351,8 +351,7 @@ overwrite is not supported for streaming write.
 The below shows how the streaming sink can be used to write a streaming query
 to write data from Kafka into a Hive table with partition-commit,
 and runs a batch query to read that data back out.
-Please see the [StreamingFileSink]({% link dev/connectors/streamfile_sink.md %}) for
-a full list of available configurations.
+Please see the [streaming sink]({% link dev/table/connectors/filesystem.md %}#streaming-sink) for a full list of available configurations.
 {% highlight sql %}
diff --git a/docs/dev/table/connectors/hive/hive_read_write.zh.md b/docs/dev/table/connectors/hive/hive_read_write.zh.md
index ec7b13e..c6fbd0a 100644
--- a/docs/dev/table/connectors/hive/hive_read_write.zh.md
+++ b/docs/dev/table/connectors/hive/hive_read_write.zh.md
@@ -351,8 +351,7 @@ overwrite is not supported for streaming write.
 The below shows how the streaming sink can be used to write a streaming query
 to write data from Kafka into a Hive table with partition-commit,
 and runs a batch query to read that data back out.
-Please see the [StreamingFileSink]({% link dev/connectors/streamfile_sink.zh.md %}) for
-a full list of available configurations.
+Please see the [streaming sink]({% link dev/table/connectors/filesystem.zh.md %}#streaming-sink) for a full list of available configurations.
 {% highlight sql %}
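
The hunks above sit just before a `{% highlight sql %}` example that is truncated in this diff. As a sketch of the pattern the surrounding prose describes — a streaming query writing Kafka data into a partitioned Hive table with partition-commit, followed by a batch read — the Flink SQL below uses illustrative table names, columns, and Kafka connector options (all hypothetical, not taken from the patched page):

```sql
-- Hive-dialect DDL: a partitioned table with partition-commit configured
SET table.sql-dialect=hive;
CREATE TABLE hive_table (
  user_id STRING,
  order_amount DOUBLE
) PARTITIONED BY (dt STRING, hr STRING) STORED AS parquet TBLPROPERTIES (
  'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00',
  'sink.partition-commit.trigger'='partition-time',
  'sink.partition-commit.delay'='1 h',
  'sink.partition-commit.policy.kind'='metastore,success-file'
);

-- default dialect: a Kafka source table with an event-time watermark
SET table.sql-dialect=default;
CREATE TABLE kafka_table (
  user_id STRING,
  order_amount DOUBLE,
  log_ts TIMESTAMP(3),
  WATERMARK FOR log_ts AS log_ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- streaming insert: partitions are committed per the TBLPROPERTIES above
INSERT INTO hive_table
SELECT user_id, order_amount,
       DATE_FORMAT(log_ts, 'yyyy-MM-dd'), DATE_FORMAT(log_ts, 'HH')
FROM kafka_table;

-- batch query reading the committed data back with partition pruning
SELECT * FROM hive_table WHERE dt = '2020-11-27' AND hr = '10';
```

The `sink.partition-commit.*` options are the configurations the patched sentence now points readers to on the filesystem connector's streaming-sink section.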