This is an automated email from the ASF dual-hosted git repository.

dockerzhang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/inlong-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 4eed246108 [INLONG-842][Doc] Remove the outdated example guide (#843)
4eed246108 is described below

commit 4eed246108287896041643f531fe70419d7d94b8
Author: Charles Zhang <[email protected]>
AuthorDate: Wed Aug 30 09:34:52 2023 +0800

    [INLONG-842][Doc] Remove the outdated example guide (#843)
---
 docs/deployment/docker.md                          |   2 +-
 docs/deployment/standalone.md                      |   2 +-
 docs/quick_start/file_pulsar_clickhouse_example.md |   4 +-
 docs/quick_start/hive_example.md                   |  60 ----------------
 docs/quick_start/img/create-group.png              | Bin 27891 -> 0 bytes
 docs/quick_start/img/create-stream.png             | Bin 21593 -> 0 bytes
 docs/quick_start/img/data-information.png          | Bin 14760 -> 0 bytes
 docs/quick_start/img/file-source.png               | Bin 16903 -> 0 bytes
 docs/quick_start/img/hive-config.png               | Bin 49673 -> 0 bytes
 docs/quick_start/img/pulsar-arch.png               | Bin 19399 -> 0 bytes
 docs/quick_start/img/pulsar-data.png               | Bin 31754 -> 0 bytes
 docs/quick_start/img/pulsar-group.png              | Bin 45807 -> 0 bytes
 docs/quick_start/img/pulsar-hive.png               | Bin 50243 -> 0 bytes
 docs/quick_start/img/pulsar-stream.png             | Bin 21593 -> 0 bytes
 docs/quick_start/mysql_kafka_clickhouse_example.md |   4 +-
 docs/quick_start/pulsar_example.md                 |  77 ---------------------
 .../current/deployment/docker.md                   |   2 +-
 .../current/deployment/standalone.md               |   2 +-
 .../quick_start/file_pulsar_clickhouse_example.md  |   4 +-
 .../current/quick_start/hive_example.md            |  62 -----------------
 .../current/quick_start/img/create-group.png       | Bin 29956 -> 0 bytes
 .../current/quick_start/img/create-stream.png      | Bin 24293 -> 0 bytes
 .../current/quick_start/img/data-information.png   | Bin 23356 -> 0 bytes
 .../current/quick_start/img/file-source.png        | Bin 19493 -> 0 bytes
 .../current/quick_start/img/hive-config.png        | Bin 50538 -> 0 bytes
 .../current/quick_start/img/pulsar-arch.png        | Bin 9060 -> 0 bytes
 .../current/quick_start/img/pulsar-data.png        | Bin 33340 -> 0 bytes
 .../current/quick_start/img/pulsar-group.png       | Bin 49125 -> 0 bytes
 .../current/quick_start/img/pulsar-hive.png        | Bin 24108 -> 0 bytes
 .../current/quick_start/img/pulsar-stream.png      | Bin 24385 -> 0 bytes
 .../quick_start/mysql_kafka_clickhouse_example.md  |   4 +-
 .../current/quick_start/pulsar_example.md          |  75 --------------------
 32 files changed, 12 insertions(+), 286 deletions(-)

diff --git a/docs/deployment/docker.md b/docs/deployment/docker.md
index 41403f3760..83bcf30199 100644
--- a/docs/deployment/docker.md
+++ b/docs/deployment/docker.md
@@ -49,7 +49,7 @@ Service URL is `pulsar://pulsar:6650`, Admin URL is `http://pulsar:8080`.
 :::
 
 ## Use
-You can refer [Pulsar Example](quick_start/pulsar_example.md) to create Data Stream.
+You can refer to [Example](quick_start/file_pulsar_clickhouse_example.md) to create a Data Stream.
 
 ## Destroy
 ```shell
diff --git a/docs/deployment/standalone.md b/docs/deployment/standalone.md
index 1ae41ede58..70f2883026 100644
--- a/docs/deployment/standalone.md
+++ b/docs/deployment/standalone.md
@@ -85,4 +85,4 @@ The ClusterTags selects the newly created `default_cluster`, and then configuring
 :::
 
 ## Use
-You can refer [Pulsar Example](quick_start/pulsar_example.md) to create Data Stream.
\ No newline at end of file
+You can refer to [Example](quick_start/file_pulsar_clickhouse_example.md) to create a Data Stream.
diff --git a/docs/quick_start/file_pulsar_clickhouse_example.md b/docs/quick_start/file_pulsar_clickhouse_example.md
index f2a5c22022..cb1fbf438b 100644
--- a/docs/quick_start/file_pulsar_clickhouse_example.md
+++ b/docs/quick_start/file_pulsar_clickhouse_example.md
@@ -1,6 +1,6 @@
 ---
-title: File -> Pulsar -> ClickHouse Example 
-sidebar_position: 4
+title: File -> Pulsar -> ClickHouse 
+sidebar_position: 2
 ---
 
 Here we use an example to introduce how to create File -> Pulsar -> ClickHouse data ingestion.
diff --git a/docs/quick_start/hive_example.md b/docs/quick_start/hive_example.md
deleted file mode 100644
index b177c33727..0000000000
--- a/docs/quick_start/hive_example.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title: Hive Example
-sidebar_position: 2
----
-
-Here we use a simple example to help you experience InLong.
-
-## Install Hive
-Hive is a required component. If you don't have Hive on your machine, we recommend using Docker to install it. Details can be found [here](https://github.com/big-data-europe/docker-hive).
-
-> Note that if you use Docker, you need to add a port mapping `8020:8020`, because it's the port of HDFS DefaultFS, and we need to use it later.
-
-## Install InLong
-Before we begin, we need to install InLong. Here we provide two ways:
-1. Install InLong with Docker according to the [instructions here](deployment/docker.md). (Recommended)
-2. Install the InLong binary according to the [instructions here](deployment/bare_metal.md).
-
-## Create a data access
-After deployment, we first enter the "Data Access" interface, click "Create an Access" in the upper right corner to create a new data access, and fill in the data streams group information as shown in the figure below.
-
-![Create Group](img/create-group.png)
-
-Then we click the "Next" button and fill in the stream information as shown in the figure below.
-
-![Create Stream](img/create-stream.png)
-
-Note that the message source is "File"; you can create a data source manually and configure the `Agent Address` and `File Path`.
-
-![File Source](img/file-source.png)
-
-Then we fill in the following information in the "data information" column below.
-
-![Data Information](img/data-information.png)
-
-Then we select Hive in the data flow and click "Add" to add the Hive configuration.
-
-![Hive Config](img/hive-config.png)
-
-Note that the target table does not need to be created in advance; InLong Manager will automatically create the table for us after the access is approved. Also, please use the connection test to ensure that InLong Manager can connect to your Hive.
-
-Then we click the "Submit for Approval" button; the access will be created successfully and enter the approval state.
-
-## Approve the data access
-Then we enter the "Approval Management" interface and click "My Approval" to approve the data access that we just applied for.
-
-At this point, the data access has been created successfully. We can see that the corresponding table has been created in Hive, and that the corresponding topic has been created successfully in the management GUI of TubeMQ.
-
-## Configure the agent file
-Then we need to create a new file `/data/collect-data/test.log` and add content to it to trigger the agent to send data to the dataproxy.
-
-``` shell
-mkdir -p /data/collect-data
-END=100000
-for ((i=1;i<=END;i++)); do
-    sleep 3
-    echo "name_$i | $i" >> /data/collect-data/test.log
-done
-```
-
-Then you can observe the Audit Data Pages and see that the data has been collected and sent successfully.
\ No newline at end of file
diff --git a/docs/quick_start/img/create-group.png b/docs/quick_start/img/create-group.png
deleted file mode 100644
index 0694f780b6..0000000000
Binary files a/docs/quick_start/img/create-group.png and /dev/null differ
diff --git a/docs/quick_start/img/create-stream.png b/docs/quick_start/img/create-stream.png
deleted file mode 100644
index 03ecb4ac0c..0000000000
Binary files a/docs/quick_start/img/create-stream.png and /dev/null differ
diff --git a/docs/quick_start/img/data-information.png b/docs/quick_start/img/data-information.png
deleted file mode 100644
index e5704bb4c5..0000000000
Binary files a/docs/quick_start/img/data-information.png and /dev/null differ
diff --git a/docs/quick_start/img/file-source.png b/docs/quick_start/img/file-source.png
deleted file mode 100644
index e75664938b..0000000000
Binary files a/docs/quick_start/img/file-source.png and /dev/null differ
diff --git a/docs/quick_start/img/hive-config.png b/docs/quick_start/img/hive-config.png
deleted file mode 100644
index 007c615d3d..0000000000
Binary files a/docs/quick_start/img/hive-config.png and /dev/null differ
diff --git a/docs/quick_start/img/pulsar-arch.png b/docs/quick_start/img/pulsar-arch.png
deleted file mode 100644
index 1afa12f9b6..0000000000
Binary files a/docs/quick_start/img/pulsar-arch.png and /dev/null differ
diff --git a/docs/quick_start/img/pulsar-data.png b/docs/quick_start/img/pulsar-data.png
deleted file mode 100644
index fdb1bdbace..0000000000
Binary files a/docs/quick_start/img/pulsar-data.png and /dev/null differ
diff --git a/docs/quick_start/img/pulsar-group.png b/docs/quick_start/img/pulsar-group.png
deleted file mode 100644
index a1b7ce71a3..0000000000
Binary files a/docs/quick_start/img/pulsar-group.png and /dev/null differ
diff --git a/docs/quick_start/img/pulsar-hive.png b/docs/quick_start/img/pulsar-hive.png
deleted file mode 100644
index d070608610..0000000000
Binary files a/docs/quick_start/img/pulsar-hive.png and /dev/null differ
diff --git a/docs/quick_start/img/pulsar-stream.png b/docs/quick_start/img/pulsar-stream.png
deleted file mode 100644
index 03ecb4ac0c..0000000000
Binary files a/docs/quick_start/img/pulsar-stream.png and /dev/null differ
diff --git a/docs/quick_start/mysql_kafka_clickhouse_example.md b/docs/quick_start/mysql_kafka_clickhouse_example.md
index 8a32c268c5..0952f59df5 100644
--- a/docs/quick_start/mysql_kafka_clickhouse_example.md
+++ b/docs/quick_start/mysql_kafka_clickhouse_example.md
@@ -1,6 +1,6 @@
 ---
-title: MySQL -> Kafka -> ClickHouse Example
-sidebar_position: 5
+title: MySQL -> Kafka -> ClickHouse
+sidebar_position: 3
 ---
 
 Here we use an example to introduce how to use Apache InLong to create MySQL -> Kafka -> ClickHouse data ingestion.
diff --git a/docs/quick_start/pulsar_example.md b/docs/quick_start/pulsar_example.md
deleted file mode 100644
index 3bdf08cd85..0000000000
--- a/docs/quick_start/pulsar_example.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-title: Pulsar Example
-sidebar_position: 2
----
-
-Apache InLong supports ingesting data through Apache Pulsar, taking full advantage of Pulsar's technical strengths over other MQs and providing a complete solution for data integration scenarios with higher data-quality requirements, such as finance and billing.
-In the following content, we will use a complete example to introduce how to ingest data with Apache Pulsar through Apache InLong.
-
-![Create Group](img/pulsar-arch.png)
-
-## Install Pulsar
-Please refer to the [Official Installation Guidelines](https://pulsar.apache.org/docs/en/standalone/).
-
-## Install Hive
-Hive is a required component. If you don't have Hive on your machine, we recommend using Docker to install it. Details can be found [here](https://github.com/big-data-europe/docker-hive).
-
-> Note that if you use Docker, you need to add a port mapping `8020:8020`, because it's the port of HDFS DefaultFS, and we need to use it later.
-
-## Install InLong
-Before we begin, we need to install InLong. Here we provide two ways:
-1. Install InLong with Docker according to the [instructions here](deployment/docker.md). (Recommended)
-2. Install the InLong binary according to the [instructions here](deployment/bare_metal.md).
-
-## Create a data ingestion
-### Configure data streams group information
-![](img/pulsar-group.png)
-When creating data ingestion, Pulsar can be selected as the message middleware for the data stream group,
-and other Pulsar-related configuration items include:
-- Queue module: Parallel or Serial; when selecting Parallel, you can set the number of topic partitions
-- Write quorum: number of copies to store for each message
-- Ack quorum: number of guaranteed copies (acks to wait for before a write is complete)
-- Retention time: how long consumed messages are retained
-- TTL: the default time-to-live for a message
-- Retention size: how much consumed-message data is retained
-
-### Configure data stream
-![](img/pulsar-stream.png)
-
-### Configure File Agent
-![](img/file-source.png)
-
-### Configure data information
-![](img/pulsar-data.png)
-
-### Configure Hive cluster
-Save the Hive cluster information and click "Ok" to submit.
-![](img/pulsar-hive.png)
-
-## Data ingestion Approval
-Enter **Approval** page, click **My Approval**, abd approve the data ingestion 
application. After the approval is over, 
-the topics and subscriptions required for the data stream will be created in 
the Pulsar cluster synchronously.
-We can use the command-line tool in the Pulsar cluster to check whether the 
topic is created successfully.
-
-## Configure File Agent
-Then we need to create a new file `/data/collect-data/test.log` and add content to it to trigger the agent to send data to the dataproxy.
-
-``` shell
-mkdir -p /data/collect-data
-END=100000
-for ((i=1;i<=END;i++)); do
-    sleep 3
-    echo "name_$i | $i" >> /data/collect-data/test.log
-done
-```
-
-Then you can observe the Audit Data Pages and see that the data has been collected and sent successfully.
-
-## Data Check
-Finally, we log in to the Hive cluster and use Hive SQL commands to check whether data is successfully inserted in the `test_stream` table.
-
-## Troubleshooting
-If data is not correctly written to the Hive cluster, you can check whether the `DataProxy` and `Sort` related information is synchronized:
-- Check whether the topic information corresponding to the data stream is correctly written in the `conf/topics.properties` file of `InLong DataProxy`:
-```
-b_test_group/test_stream=persistent://public/b_test_group/test_stream
-```
\ No newline at end of file
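The troubleshooting entry above checks the DataProxy topic mapping; a minimal grep sketch (assuming the `conf/topics.properties` path and the example group/stream names from the guide) might look like:

```shell
# Sketch: verify the stream's topic mapping is present in DataProxy's
# topics.properties. The path and group/stream names are the guide's examples.
TOPICS_FILE="conf/topics.properties"
KEY="b_test_group/test_stream"
grep "^${KEY}=" "$TOPICS_FILE" \
    && echo "mapping present for ${KEY}" \
    || echo "mapping missing for ${KEY}" >&2
```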
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/docker.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/docker.md
index 5d7b1753aa..4558502ab1 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/docker.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/docker.md
@@ -49,7 +49,7 @@ Service URL is `pulsar://pulsar:6650`, Admin URL is `http://pulsar:8080`.
 :::
 
 ## Use
-To create a data stream, refer to [Pulsar Example](quick_start/pulsar_example.md).
+To create a data stream, refer to [Example](quick_start/file_pulsar_clickhouse_example.md).
 
 ## Destroy
 ```
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/standalone.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/standalone.md
index 7c72553536..febc29ddee 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/standalone.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/standalone.md
@@ -85,5 +85,5 @@ Password: inlong
 :::
 
 ## Use
-To create a data stream, refer to [Pulsar Example](quick_start/pulsar_example.md).
+To create a data stream, refer to [Example](quick_start/file_pulsar_clickhouse_example.md).
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/file_pulsar_clickhouse_example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/file_pulsar_clickhouse_example.md
index 9efddab797..5964006cbc 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/file_pulsar_clickhouse_example.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/file_pulsar_clickhouse_example.md
@@ -1,6 +1,6 @@
 ---
-title: File -> Pulsar -> ClickHouse Example
-sidebar_position: 4
+title: File -> Pulsar -> ClickHouse
+sidebar_position: 2
 ---
 
 In the following content, we will use a complete example to introduce how to use Apache InLong to create a File -> Pulsar -> ClickHouse data stream.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/hive_example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/hive_example.md
deleted file mode 100644
index 701b43a2b5..0000000000
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/hive_example.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title: Hive Ingestion Example
-sidebar_position: 2
----
-
-This section walks through a simple example to help you quickly experience the complete InLong workflow.
-
-
-## Install Hive
-Hive is a required component. If you don't have Hive on your machine, we recommend using Docker for a quick installation. Details can be found [here](https://github.com/big-data-europe/docker-hive).
-
-> Note that if you use the above Docker image, you need to add a port mapping `8020:8020` on the namenode, because it is the HDFS DefaultFS port and will be needed later when configuring Hive.
-
-## Install InLong
-Before we begin, we need to install all components of InLong. Two ways are provided:
-1. Follow the [instructions here](deployment/docker.md) to deploy quickly with Docker. (Recommended)
-2. Follow the [instructions here](deployment/bare_metal.md) to install each component from binary packages.
-
-
-## Create a data access
-After deployment, first enter the "Data Access" page and click "Create an Access" in the upper right corner to create a new access, filling in the data stream Group information as shown below
-
-![Create Group](img/create-group.png)
-
-Then click "Next" and fill in the data stream information as shown below
-
-![Create Stream](img/create-stream.png)
-
-Note that the message source should be "File"; choose "Create data source" and configure the `Agent address` and the collection `file path`:
-
-![File Source](img/file-source.png)
-
-Then fill in the following information in the "Data Information" section below
-
-![Data Information](img/data-information.png)
-
-Then select Hive in the data flow and click "Add" to add the Hive configuration
-
-![Hive Config](img/hive-config.png)
-
-Note that the target table does not need to be created in advance; InLong Manager will create it automatically after the access is approved. Also, please use "Connection Test" to make sure InLong Manager can connect to your Hive.
-
-Then click the "Submit for Approval" button; the access will be created and enter the approval state.
-
-## Approve the access
-Enter the "Approval Management" page and click "My Approval" to approve the access just applied for.
-
-At this point the access has been created. We can see that the corresponding table has been created in Hive, and the corresponding topic has been created successfully in the TubeMQ management console.
-
-## Configure the agent collection file
-Next, we can create `/data/collect-data/test.log` and add content to it to trigger the agent to send data to the dataproxy.
-
-``` shell
-mkdir -p /data/collect-data
-END=100000
-for ((i=1;i<=END;i++)); do
-    sleep 3
-    echo "name_$i | $i" >> /data/collect-data/test.log
-done
-```
-
-You can observe the audit data page and see that the data has been collected and sent successfully.
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/create-group.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/create-group.png
deleted file mode 100644
index 591dcba965..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/create-group.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/create-stream.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/create-stream.png
deleted file mode 100644
index 9d91755c37..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/create-stream.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/data-information.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/data-information.png
deleted file mode 100644
index 8c0742b651..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/data-information.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/file-source.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/file-source.png
deleted file mode 100644
index 3958c72cb8..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/file-source.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/hive-config.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/hive-config.png
deleted file mode 100644
index 2ea45ef865..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/hive-config.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-arch.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-arch.png
deleted file mode 100644
index a54d1e8d88..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-arch.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-data.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-data.png
deleted file mode 100644
index da20e156b4..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-data.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-group.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-group.png
deleted file mode 100644
index 3b0f94d049..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-group.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-hive.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-hive.png
deleted file mode 100644
index 3651a0726e..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-hive.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-stream.png b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-stream.png
deleted file mode 100644
index 5310d1f28b..0000000000
Binary files a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/img/pulsar-stream.png and /dev/null differ
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/mysql_kafka_clickhouse_example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/mysql_kafka_clickhouse_example.md
index a6021a5053..91f649d5ec 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/mysql_kafka_clickhouse_example.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/mysql_kafka_clickhouse_example.md
@@ -1,6 +1,6 @@
 ---
-title: MySQL -> Kafka -> ClickHouse Example
-sidebar_position: 5
+title: MySQL -> Kafka -> ClickHouse
+sidebar_position: 3
 ---
 
 In the following content, we will use a complete example to introduce how to use Apache InLong to create a MySQL -> Kafka -> ClickHouse data pipeline.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/pulsar_example.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/pulsar_example.md
deleted file mode 100644
index 9eb30054ed..0000000000
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/quick_start/pulsar_example.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: Pulsar Example
-sidebar_position: 2
----
-
-Apache InLong supports ingesting data through Apache Pulsar, taking full advantage of Pulsar's technical strengths over other MQs and providing a complete solution for data ingestion scenarios with higher data-quality requirements, such as finance and billing.
-In the following content, we will use a complete example to introduce how to ingest data with Apache Pulsar through Apache InLong.
-
-![Create Group](img/pulsar-arch.png)
-
-## Install Pulsar
-To deploy an Apache Pulsar cluster, refer to the [Official Installation Guidelines](https://pulsar.apache.org/docs/en/standalone/).
-
-## Install Hive
-Hive is a required component. If you don't have Hive on your machine, we recommend using Docker for a quick installation. Details can be found [here](https://github.com/big-data-europe/docker-hive).
-
-> Note that if you use the above Docker image, you need to add a port mapping `8020:8020` on the namenode, because it is the HDFS DefaultFS port and will be needed later when configuring Hive.
-
-## Install InLong
-Before we begin, we need to install all components of InLong. Two ways are provided:
-1. Follow the [instructions here](deployment/docker.md) to deploy quickly with Docker. (Recommended)
-2. Follow the [instructions here](deployment/bare_metal.md) to install each component from binary packages.
-
-## Create a data ingestion
-### Configure data stream group information
-![](img/pulsar-group.png)
-When creating a data ingestion, select Pulsar as the message middleware for the data stream group; other Pulsar-related configuration items include:
-- Queue module: parallel or serial; when parallel is selected, the number of topic partitions can be set, while serial means a single partition
-- Write quorum: number of copies to store for each message
-- Ack quorum: number of bookies that must acknowledge a write
-- Retention time: how long messages already acknowledged by consumers are kept
-- TTL: expiration time for unacknowledged messages
-- Retention size: how much data from already-acknowledged messages is kept
-
-### Configure the data stream
-![](img/pulsar-stream.png)
-
-### Configure the File Agent
-![](img/file-source.png)
-
-### Configure the data format
-![](img/pulsar-data.png)
-
-### Configure the Hive cluster
-Save the Hive cluster information and click "OK".
-![](img/pulsar-hive.png)
-
-## Data ingestion approval
-Enter the **Approval** page, click **My Approval**, and approve the ingestion application submitted above. After approval, the topics and subscriptions required by the data stream will be created in the Pulsar cluster.
-We can use the command-line tool in the Pulsar cluster to check whether the topic was created successfully.
-
-## Configure the agent collection file
-Next, we can create `/data/collect-data/test.log` and add content to it to trigger the agent to send data to the dataproxy.
-
-``` shell
-mkdir -p /data/collect-data
-END=100000
-for ((i=1;i<=END;i++)); do
-    sleep 3
-    echo "name_$i | $i" >> /data/collect-data/test.log
-done
-```
-
-You can observe the audit data page and see that the data has been collected and sent successfully.
-
-## Data landing check
-
-Finally, we log in to the Hive cluster and use Hive SQL commands to check whether data was successfully inserted into the `test_stream` table.
-
-## Troubleshooting
-If data is not correctly written to the Hive cluster, you can check whether the `DataProxy` and `Sort` related information is synchronized:
-- Check whether the topic information for the data stream is correctly written in the `conf/topics.properties` file of `InLong DataProxy`:
-```
-b_test_group/test_stream=persistent://public/b_test_group/test_stream
-```
\ No newline at end of file
