This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new a53f0d12b08 replace dead links in docs for ecosystem (#3437)
a53f0d12b08 is described below

commit a53f0d12b08487dc05a8adc3338928423c7a5971
Author: catpineapple <[email protected]>
AuthorDate: Tue Mar 10 00:40:52 2026 +0800

    replace dead links in docs for ecosystem (#3437)
    
    - smartbi.info/download → smartbi.com.cn
    - doc.bladepipe.com → www.bladepipe.com/docs (updated paths)
    - docs.getdbt.com/faqs/seeds/build-one-seed → docs.getdbt.com/docs/build/seeds
    - import/stream-load-manual → import/import-way/stream-load-manual
    
    ## Versions
    
    - [x] dev
    - [x] 4.x
    - [x] 3.x
    - [x] 2.1
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
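
    For anyone reproducing this sweep locally, a minimal sketch of the
    replacement pass in Python follows. It assumes the repo layout shown in
    the diffstat below and hardcodes the URL mapping from the list above;
    the script and its names are illustrative, not the tooling actually
    used for this change.

#!/usr/bin/env python3
# Hypothetical helper: rewrite the dead links listed in the commit
# message across all markdown files. Illustrative only; not part of
# this PR's actual tooling.
from pathlib import Path

REPLACEMENTS = {
    "https://www.smartbi.info/download": "https://www.smartbi.com.cn",
    "https://doc.bladepipe.com/productOP/docker/install_worker_docker":
        "https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_docker",
    "https://doc.bladepipe.com/productOP/binary/install_worker_binary":
        "https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_binary",
    "https://docs.getdbt.com/faqs/seeds/build-one-seed":
        "https://docs.getdbt.com/docs/build/seeds",
    "https://doris.apache.org/docs/data-operate/import/stream-load-manual":
        "https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual",
}

def rewrite(root: Path) -> None:
    # Walk every markdown file under the site repo (blog/, docs/,
    # versioned_docs/, i18n/) and apply the literal URL replacements.
    for md in root.rglob("*.md"):
        text = md.read_text(encoding="utf-8")
        updated = text
        for old, new in REPLACEMENTS.items():
            updated = updated.replace(old, new)
        if updated != text:
            md.write_text(updated, encoding="utf-8")
            print(f"updated {md}")

if __name__ == "__main__":
    rewrite(Path("."))

    Run from the repository root; only files containing one of the dead
    URLs are rewritten, so `git status` afterwards should list the touched
    files.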
---
 blog/auto-partition-in-apache-doris.md                                  | 2 +-
 docs/ecosystem/bi/smartbi.md                                            | 2 +-
 docs/ecosystem/cloudcanal.md                                            | 2 +-
 docs/ecosystem/datax.md                                                 | 2 +-
 docs/ecosystem/dbt-doris-adapter.md                                     | 2 +-
 docs/ecosystem/doris-kafka-connector.md                                 | 2 +-
 .../current/ecosystem/dbt-doris-adapter.md                              | 2 +-
 .../version-2.1/ecosystem/dbt-doris-adapter.md                          | 2 +-
 .../version-3.x/ecosystem/dbt-doris-adapter.md                          | 2 +-
 .../version-4.x/ecosystem/dbt-doris-adapter.md                          | 2 +-
 versioned_docs/version-2.1/ecosystem/bi/smartbi.md                      | 2 +-
 versioned_docs/version-2.1/ecosystem/cloudcanal.md                      | 2 +-
 versioned_docs/version-2.1/ecosystem/datax.md                           | 2 +-
 versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md               | 2 +-
 versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md           | 2 +-
 versioned_docs/version-3.x/ecosystem/bi/smartbi.md                      | 2 +-
 versioned_docs/version-3.x/ecosystem/cloudcanal.md                      | 2 +-
 versioned_docs/version-3.x/ecosystem/datax.md                           | 2 +-
 versioned_docs/version-3.x/ecosystem/dbt-doris-adapter.md               | 2 +-
 versioned_docs/version-3.x/ecosystem/doris-kafka-connector.md           | 2 +-
 versioned_docs/version-4.x/ecosystem/bi/smartbi.md                      | 2 +-
 versioned_docs/version-4.x/ecosystem/cloudcanal.md                      | 2 +-
 versioned_docs/version-4.x/ecosystem/datax.md                           | 2 +-
 versioned_docs/version-4.x/ecosystem/dbt-doris-adapter.md               | 2 +-
 versioned_docs/version-4.x/ecosystem/doris-kafka-connector.md           | 2 +-
 25 files changed, 25 insertions(+), 25 deletions(-)
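A quick spot check one might run before merging, to confirm the replacement
targets actually resolve (again illustrative and not part of this commit;
some sites answer HEAD requests with a redirect or 403, so treat failures
as a prompt to check manually):

#!/usr/bin/env python3
# Hypothetical spot check: confirm the replacement URLs respond.
import urllib.request

NEW_LINKS = [
    "https://www.smartbi.com.cn",
    "https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_docker",
    "https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_binary",
    "https://docs.getdbt.com/docs/build/seeds",
    "https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual",
]

for url in NEW_LINKS:
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(resp.status, url)
    except Exception as exc:  # dead or unreachable link
        print("FAIL", url, exc)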

diff --git a/blog/auto-partition-in-apache-doris.md b/blog/auto-partition-in-apache-doris.md
index 245f1e82560..e5b4f2f9238 100644
--- a/blog/auto-partition-in-apache-doris.md
+++ b/blog/auto-partition-in-apache-doris.md
@@ -414,7 +414,7 @@ Dynamic Partition does not slow down data ingestion speed, while Auto Partition
 
 ## Auto Partition: ingestion workflow
 
-This part is about how data ingestion is implemented with the Auto Partition mechanism, and we use [Stream Load](https://doris.apache.org/docs/data-operate/import/stream-load-manual) as an example. When Doris initiates a data import, one of the Doris Backend nodes takes on the role of the Coordinator. It is responsible for the initial data processing work and then dispatching the data to the appropriate BE nodes, known as the Executors, for execution.
+This part is about how data ingestion is implemented with the Auto Partition mechanism, and we use [Stream Load](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual) as an example. When Doris initiates a data import, one of the Doris Backend nodes takes on the role of the Coordinator. It is responsible for the initial data processing work and then dispatching the data to the appropriate BE nodes, known as the Executors, for execution.
 
 ![Auto Partition: ingestion workflow](/images/auto-partition-ingestion-workflow.png)
 
diff --git a/docs/ecosystem/bi/smartbi.md b/docs/ecosystem/bi/smartbi.md
index 6293459c6a3..f14e3ec7e3b 100644
--- a/docs/ecosystem/bi/smartbi.md
+++ b/docs/ecosystem/bi/smartbi.md
@@ -12,7 +12,7 @@ Smartbi is a collection of software services and application connectors that can
 
 ## Precondition
 
-you can visit  https://www.smartbi.info/download to download and install Smartbi.
+you can visit  https://www.smartbi.com.cn to download and install Smartbi.
 
 ## Data connection and application
 
diff --git a/docs/ecosystem/cloudcanal.md b/docs/ecosystem/cloudcanal.md
index 03b6cd5f2a7..a1b4edd590b 100644
--- a/docs/ecosystem/cloudcanal.md
+++ b/docs/ecosystem/cloudcanal.md
@@ -28,7 +28,7 @@ For more functions and parameter settings, please refer to [BladePipe Connection
 :::
 
 ## Installation
-Follow the instructions in [Install Worker (Docker)](https://doc.bladepipe.com/productOP/docker/install_worker_docker) or [Install Worker (Binary)](https://doc.bladepipe.com/productOP/binary/install_worker_binary) to download and install a BladePipe Worker.
+Follow the instructions in [Install Worker (Docker)](https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_docker) or [Install Worker (Binary)](https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_binary) to download and install a BladePipe Worker.
 
 ## Example
 Taking a MySQL instance as an example, the following part describes how to move data from MySQL to Doris.
diff --git a/docs/ecosystem/datax.md b/docs/ecosystem/datax.md
index 6fd96b845bc..3c89ce7f148 100644
--- a/docs/ecosystem/datax.md
+++ b/docs/ecosystem/datax.md
@@ -123,7 +123,7 @@ Download the [source code](https://github.com/apache/doris/tree/master/extension
 
 * **loadProps**
 
-  - Description: The request parameter of StreamLoad. For details, refer to the StreamLoad introduction page. [Stream load - Apache Doris](https://doris.apache.org/docs/data-operate/import/stream-load-manual)
+  - Description: The request parameter of StreamLoad. For details, refer to the StreamLoad introduction page. [Stream load - Apache Doris](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual)
 
     This includes the imported data format: format, etc. The imported data format defaults to csv, which supports JSON. For details, please refer to the type conversion section below, or refer to the official information of Stream load above.
 
diff --git a/docs/ecosystem/dbt-doris-adapter.md b/docs/ecosystem/dbt-doris-adapter.md
index c3049a2b1e5..f544f68a9ef 100644
--- a/docs/ecosystem/dbt-doris-adapter.md
+++ b/docs/ecosystem/dbt-doris-adapter.md
@@ -217,7 +217,7 @@ The details of the above configuration items are as follows:
 
 ### dbt-doris adapter seed
 
-[`seed`](https://docs.getdbt.com/faqs/seeds/build-one-seed) is a functional module used to load data files such as csv. It is a way to load files into the library and participate in model building, but there are the following precautions:
+[`seed`](https://docs.getdbt.com/docs/build/seeds) is a functional module used to load data files such as csv. It is a way to load files into the library and participate in model building, but there are the following precautions:
 1. Seeds should not be used to load raw data (for example, large CSV exports from a production database).
 2. Since seeds are version controlled, they are best suited to files that contain business-specific logic, for example a list of country codes or user IDs of employees.
 3. Loading CSVs using dbt's seed functionality is not performant for large files. Consider using `streamload` to load these CSVs into doris.
diff --git a/docs/ecosystem/doris-kafka-connector.md b/docs/ecosystem/doris-kafka-connector.md
index 5182cde3495..2371d999cc8 100644
--- a/docs/ecosystem/doris-kafka-connector.md
+++ b/docs/ecosystem/doris-kafka-connector.md
@@ -209,7 +209,7 @@ errors.deadletterqueue.topic.replication.factor=1
 | jmx                         | -                                    | true          | N            | To obtain connector internal monitoring indicators through JMX, please refer to: [Doris-Connector-JMX](https://github.com/apache/doris-kafka-connector/blob/master/docs/en/Doris-Connector-JMX.md) [...]
 | label.prefix                | -                                    | ${name}       | N            | Stream load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N            | Whether to redirect StreamLoad requests. After being turned on, StreamLoad will redirect to the BE where data needs to be written through FE, and the BE information will no longer be displayed. |
-| sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N            | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: [...]
+| sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N            | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N            | How to ensure data consistency when consuming Kafka data is imported into Doris. Supports `at_least_once` `exactly_once`, default is `at_least_once`. Doris needs to be upgraded to 2.1.0 or above to ensure data `exactly_once` |
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N            | Type conversion mode of upstream data when using Connector to consume Kafka data. <br/> ```normal``` means consuming data in Kafka normally without any type conversion. <br/> ```debezium_ingestion``` means that when Kafka upstream data is collected through CDC (Changelog Data Capture) tools such as Debezium, the upstr [...]
 | debezium.schema.evolution   | `none`,<br/> `basic`                 | none          | N            | Use Debezium to collect upstream database systems (such as MySQL), and when structural changes occur, the added fields can be synchronized to Doris. <br/>`none` means that when the structure of the upstream database system changes, the changed structure will not be synchronized to Doris. <br/> `basic` means synchroniz [...]
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/dbt-doris-adapter.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/dbt-doris-adapter.md
index 1a85d183caf..75e1a7c12b8 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/dbt-doris-adapter.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/ecosystem/dbt-doris-adapter.md
@@ -229,7 +229,7 @@ models:
 
 ### dbt-doris adapter seed
 
-[`seed`](https://docs.getdbt.com/faqs/seeds/build-one-seed) 是用于加载 csv 等数据文件时的功能模块,它是一种加载文件入库参与模型构建的一种方式,但有以下注意事项:
+[`seed`](https://docs.getdbt.com/docs/build/seeds) 是用于加载 csv 等数据文件时的功能模块,它是一种加载文件入库参与模型构建的一种方式,但有以下注意事项:
 
 1. seed 不应用于加载原始数据(例如,从生产数据库导出大型 CSV 文件)。
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/dbt-doris-adapter.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/dbt-doris-adapter.md
index 1a85d183caf..75e1a7c12b8 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/dbt-doris-adapter.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/ecosystem/dbt-doris-adapter.md
@@ -229,7 +229,7 @@ models:
 
 ### dbt-doris adapter seed
 
-[`seed`](https://docs.getdbt.com/faqs/seeds/build-one-seed) 是用于加载 csv 等数据文件时的功能模块,它是一种加载文件入库参与模型构建的一种方式,但有以下注意事项:
+[`seed`](https://docs.getdbt.com/docs/build/seeds) 是用于加载 csv 等数据文件时的功能模块,它是一种加载文件入库参与模型构建的一种方式,但有以下注意事项:
 
 1. seed 不应用于加载原始数据(例如,从生产数据库导出大型 CSV 文件)。
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/dbt-doris-adapter.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/dbt-doris-adapter.md
index 1a85d183caf..75e1a7c12b8 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/dbt-doris-adapter.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/ecosystem/dbt-doris-adapter.md
@@ -229,7 +229,7 @@ models:
 
 ### dbt-doris adapter seed
 
-[`seed`](https://docs.getdbt.com/faqs/seeds/build-one-seed) 是用于加载 csv 等数据文件时的功能模块,它是一种加载文件入库参与模型构建的一种方式,但有以下注意事项:
+[`seed`](https://docs.getdbt.com/docs/build/seeds) 是用于加载 csv 等数据文件时的功能模块,它是一种加载文件入库参与模型构建的一种方式,但有以下注意事项:
 
 1. seed 不应用于加载原始数据(例如,从生产数据库导出大型 CSV 文件)。
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/ecosystem/dbt-doris-adapter.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/ecosystem/dbt-doris-adapter.md
index 1a85d183caf..75e1a7c12b8 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/ecosystem/dbt-doris-adapter.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/ecosystem/dbt-doris-adapter.md
@@ -229,7 +229,7 @@ models:
 
 ### dbt-doris adapter seed
 
-[`seed`](https://docs.getdbt.com/faqs/seeds/build-one-seed) 是用于加载 csv 等数据文件时的功能模块,它是一种加载文件入库参与模型构建的一种方式,但有以下注意事项:
+[`seed`](https://docs.getdbt.com/docs/build/seeds) 是用于加载 csv 等数据文件时的功能模块,它是一种加载文件入库参与模型构建的一种方式,但有以下注意事项:
 
 1. seed 不应用于加载原始数据(例如,从生产数据库导出大型 CSV 文件)。
 
diff --git a/versioned_docs/version-2.1/ecosystem/bi/smartbi.md b/versioned_docs/version-2.1/ecosystem/bi/smartbi.md
index 6293459c6a3..f14e3ec7e3b 100644
--- a/versioned_docs/version-2.1/ecosystem/bi/smartbi.md
+++ b/versioned_docs/version-2.1/ecosystem/bi/smartbi.md
@@ -12,7 +12,7 @@ Smartbi is a collection of software services and application connectors that can
 
 ## Precondition
 
-you can visit  https://www.smartbi.info/download to download and install Smartbi.
+you can visit  https://www.smartbi.com.cn to download and install Smartbi.
 
 ## Data connection and application
 
diff --git a/versioned_docs/version-2.1/ecosystem/cloudcanal.md b/versioned_docs/version-2.1/ecosystem/cloudcanal.md
index 03b6cd5f2a7..a1b4edd590b 100644
--- a/versioned_docs/version-2.1/ecosystem/cloudcanal.md
+++ b/versioned_docs/version-2.1/ecosystem/cloudcanal.md
@@ -28,7 +28,7 @@ For more functions and parameter settings, please refer to [BladePipe Connection
 :::
 
 ## Installation
-Follow the instructions in [Install Worker (Docker)](https://doc.bladepipe.com/productOP/docker/install_worker_docker) or [Install Worker (Binary)](https://doc.bladepipe.com/productOP/binary/install_worker_binary) to download and install a BladePipe Worker.
+Follow the instructions in [Install Worker (Docker)](https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_docker) or [Install Worker (Binary)](https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_binary) to download and install a BladePipe Worker.
 
 ## Example
 Taking a MySQL instance as an example, the following part describes how to move data from MySQL to Doris.
diff --git a/versioned_docs/version-2.1/ecosystem/datax.md b/versioned_docs/version-2.1/ecosystem/datax.md
index 6fd96b845bc..3c89ce7f148 100644
--- a/versioned_docs/version-2.1/ecosystem/datax.md
+++ b/versioned_docs/version-2.1/ecosystem/datax.md
@@ -123,7 +123,7 @@ Download the [source code](https://github.com/apache/doris/tree/master/extension
 
 * **loadProps**
 
-  - Description: The request parameter of StreamLoad. For details, refer to the StreamLoad introduction page. [Stream load - Apache Doris](https://doris.apache.org/docs/data-operate/import/stream-load-manual)
+  - Description: The request parameter of StreamLoad. For details, refer to the StreamLoad introduction page. [Stream load - Apache Doris](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual)
 
     This includes the imported data format: format, etc. The imported data format defaults to csv, which supports JSON. For details, please refer to the type conversion section below, or refer to the official information of Stream load above.
 
diff --git a/versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md b/versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md
index 9987a8f18cc..691bafd7b77 100644
--- a/versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md
+++ b/versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md
@@ -217,7 +217,7 @@ The details of the above configuration items are as follows:
 
 ### dbt-doris adapter seed
 
-[`seed`](https://docs.getdbt.com/faqs/seeds/build-one-seed) is a functional module used to load data files such as csv. It is a way to load files into the library and participate in model building, but there are the following precautions:
+[`seed`](https://docs.getdbt.com/docs/build/seeds) is a functional module used to load data files such as csv. It is a way to load files into the library and participate in model building, but there are the following precautions:
 1. Seeds should not be used to load raw data (for example, large CSV exports from a production database).
 2. Since seeds are version controlled, they are best suited to files that contain business-specific logic, for example a list of country codes or user IDs of employees.
 3. Loading CSVs using dbt's seed functionality is not performant for large files. Consider using `streamload` to load these CSVs into doris.
diff --git a/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md b/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
index fedd96ee3ea..49c1c5ffe3e 100644
--- a/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
+++ b/versioned_docs/version-2.1/ecosystem/doris-kafka-connector.md
@@ -209,7 +209,7 @@ errors.deadletterqueue.topic.replication.factor=1
 | jmx                         | -                                    | true          | N            | To obtain connector internal monitoring indicators through JMX, please refer to: [Doris-Connector-JMX](https://github.com/apache/doris-kafka-connector/blob/master/docs/en/Doris-Connector-JMX.md) [...]
 | label.prefix                | -                                    | ${name}       | N            | Stream load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N            | Whether to redirect StreamLoad requests. After being turned on, StreamLoad will redirect to the BE where data needs to be written through FE, and the BE information will no longer be displayed. |
-| sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N            | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: [...]
+| sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N            | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N            | How to ensure data consistency when consuming Kafka data is imported into Doris. Supports `at_least_once` `exactly_once`, default is `at_least_once`. Doris needs to be upgraded to 2.1.0 or above to ensure data `exactly_once` |
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N            | Type conversion mode of upstream data when using Connector to consume Kafka data. <br/> ```normal``` means consuming data in Kafka normally without any type conversion. <br/> ```debezium_ingestion``` means that when Kafka upstream data is collected through CDC (Changelog Data Capture) tools such as Debezium, the upstr [...]
 | debezium.schema.evolution   | `none`,<br/> `basic`                 | none          | N            | Use Debezium to collect upstream database systems (such as MySQL), and when structural changes occur, the added fields can be synchronized to Doris. <br/>`none` means that when the structure of the upstream database system changes, the changed structure will not be synchronized to Doris. <br/> `basic` means synchroniz [...]
diff --git a/versioned_docs/version-3.x/ecosystem/bi/smartbi.md b/versioned_docs/version-3.x/ecosystem/bi/smartbi.md
index 6293459c6a3..f14e3ec7e3b 100644
--- a/versioned_docs/version-3.x/ecosystem/bi/smartbi.md
+++ b/versioned_docs/version-3.x/ecosystem/bi/smartbi.md
@@ -12,7 +12,7 @@ Smartbi is a collection of software services and application connectors that can
 
 ## Precondition
 
-you can visit  https://www.smartbi.info/download to download and install Smartbi.
+you can visit  https://www.smartbi.com.cn to download and install Smartbi.
 
 ## Data connection and application
 
diff --git a/versioned_docs/version-3.x/ecosystem/cloudcanal.md b/versioned_docs/version-3.x/ecosystem/cloudcanal.md
index 03b6cd5f2a7..a1b4edd590b 100644
--- a/versioned_docs/version-3.x/ecosystem/cloudcanal.md
+++ b/versioned_docs/version-3.x/ecosystem/cloudcanal.md
@@ -28,7 +28,7 @@ For more functions and parameter settings, please refer to [BladePipe Connection
 :::
 
 ## Installation
-Follow the instructions in [Install Worker (Docker)](https://doc.bladepipe.com/productOP/docker/install_worker_docker) or [Install Worker (Binary)](https://doc.bladepipe.com/productOP/binary/install_worker_binary) to download and install a BladePipe Worker.
+Follow the instructions in [Install Worker (Docker)](https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_docker) or [Install Worker (Binary)](https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_binary) to download and install a BladePipe Worker.
 
 ## Example
 Taking a MySQL instance as an example, the following part describes how to move data from MySQL to Doris.
diff --git a/versioned_docs/version-3.x/ecosystem/datax.md b/versioned_docs/version-3.x/ecosystem/datax.md
index 6fd96b845bc..3c89ce7f148 100644
--- a/versioned_docs/version-3.x/ecosystem/datax.md
+++ b/versioned_docs/version-3.x/ecosystem/datax.md
@@ -123,7 +123,7 @@ Download the [source code](https://github.com/apache/doris/tree/master/extension
 
 * **loadProps**
 
-  - Description: The request parameter of StreamLoad. For details, refer to the StreamLoad introduction page. [Stream load - Apache Doris](https://doris.apache.org/docs/data-operate/import/stream-load-manual)
+  - Description: The request parameter of StreamLoad. For details, refer to the StreamLoad introduction page. [Stream load - Apache Doris](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual)
 
     This includes the imported data format: format, etc. The imported data format defaults to csv, which supports JSON. For details, please refer to the type conversion section below, or refer to the official information of Stream load above.
 
diff --git a/versioned_docs/version-3.x/ecosystem/dbt-doris-adapter.md b/versioned_docs/version-3.x/ecosystem/dbt-doris-adapter.md
index 9987a8f18cc..691bafd7b77 100644
--- a/versioned_docs/version-3.x/ecosystem/dbt-doris-adapter.md
+++ b/versioned_docs/version-3.x/ecosystem/dbt-doris-adapter.md
@@ -217,7 +217,7 @@ The details of the above configuration items are as follows:
 
 ### dbt-doris adapter seed
 
-[`seed`](https://docs.getdbt.com/faqs/seeds/build-one-seed) is a functional module used to load data files such as csv. It is a way to load files into the library and participate in model building, but there are the following precautions:
+[`seed`](https://docs.getdbt.com/docs/build/seeds) is a functional module used to load data files such as csv. It is a way to load files into the library and participate in model building, but there are the following precautions:
 1. Seeds should not be used to load raw data (for example, large CSV exports from a production database).
 2. Since seeds are version controlled, they are best suited to files that contain business-specific logic, for example a list of country codes or user IDs of employees.
 3. Loading CSVs using dbt's seed functionality is not performant for large files. Consider using `streamload` to load these CSVs into doris.
diff --git a/versioned_docs/version-3.x/ecosystem/doris-kafka-connector.md b/versioned_docs/version-3.x/ecosystem/doris-kafka-connector.md
index fedd96ee3ea..49c1c5ffe3e 100644
--- a/versioned_docs/version-3.x/ecosystem/doris-kafka-connector.md
+++ b/versioned_docs/version-3.x/ecosystem/doris-kafka-connector.md
@@ -209,7 +209,7 @@ errors.deadletterqueue.topic.replication.factor=1
 | jmx                         | -                                    | true          | N            | To obtain connector internal monitoring indicators through JMX, please refer to: [Doris-Connector-JMX](https://github.com/apache/doris-kafka-connector/blob/master/docs/en/Doris-Connector-JMX.md) [...]
 | label.prefix                | -                                    | ${name}       | N            | Stream load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N            | Whether to redirect StreamLoad requests. After being turned on, StreamLoad will redirect to the BE where data needs to be written through FE, and the BE information will no longer be displayed. |
-| sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N            | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: [...]
+| sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N            | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N            | How to ensure data consistency when consuming Kafka data is imported into Doris. Supports `at_least_once` `exactly_once`, default is `at_least_once`. Doris needs to be upgraded to 2.1.0 or above to ensure data `exactly_once` |
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N            | Type conversion mode of upstream data when using Connector to consume Kafka data. <br/> ```normal``` means consuming data in Kafka normally without any type conversion. <br/> ```debezium_ingestion``` means that when Kafka upstream data is collected through CDC (Changelog Data Capture) tools such as Debezium, the upstr [...]
 | debezium.schema.evolution   | `none`,<br/> `basic`                 | none          | N            | Use Debezium to collect upstream database systems (such as MySQL), and when structural changes occur, the added fields can be synchronized to Doris. <br/>`none` means that when the structure of the upstream database system changes, the changed structure will not be synchronized to Doris. <br/> `basic` means synchroniz [...]
diff --git a/versioned_docs/version-4.x/ecosystem/bi/smartbi.md b/versioned_docs/version-4.x/ecosystem/bi/smartbi.md
index 6293459c6a3..f14e3ec7e3b 100644
--- a/versioned_docs/version-4.x/ecosystem/bi/smartbi.md
+++ b/versioned_docs/version-4.x/ecosystem/bi/smartbi.md
@@ -12,7 +12,7 @@ Smartbi is a collection of software services and application connectors that can
 
 ## Precondition
 
-you can visit  https://www.smartbi.info/download to download and install Smartbi.
+you can visit  https://www.smartbi.com.cn to download and install Smartbi.
 
 ## Data connection and application
 
diff --git a/versioned_docs/version-4.x/ecosystem/cloudcanal.md b/versioned_docs/version-4.x/ecosystem/cloudcanal.md
index 03b6cd5f2a7..a1b4edd590b 100644
--- a/versioned_docs/version-4.x/ecosystem/cloudcanal.md
+++ b/versioned_docs/version-4.x/ecosystem/cloudcanal.md
@@ -28,7 +28,7 @@ For more functions and parameter settings, please refer to [BladePipe Connection
 :::
 
 ## Installation
-Follow the instructions in [Install Worker (Docker)](https://doc.bladepipe.com/productOP/docker/install_worker_docker) or [Install Worker (Binary)](https://doc.bladepipe.com/productOP/binary/install_worker_binary) to download and install a BladePipe Worker.
+Follow the instructions in [Install Worker (Docker)](https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_docker) or [Install Worker (Binary)](https://www.bladepipe.com/docs/productOP/byoc/installation/install_worker_binary) to download and install a BladePipe Worker.
 
 ## Example
 Taking a MySQL instance as an example, the following part describes how to move data from MySQL to Doris.
diff --git a/versioned_docs/version-4.x/ecosystem/datax.md b/versioned_docs/version-4.x/ecosystem/datax.md
index 6fd96b845bc..3c89ce7f148 100644
--- a/versioned_docs/version-4.x/ecosystem/datax.md
+++ b/versioned_docs/version-4.x/ecosystem/datax.md
@@ -123,7 +123,7 @@ Download the [source code](https://github.com/apache/doris/tree/master/extension
 
 * **loadProps**
 
-  - Description: The request parameter of StreamLoad. For details, refer to the StreamLoad introduction page. [Stream load - Apache Doris](https://doris.apache.org/docs/data-operate/import/stream-load-manual)
+  - Description: The request parameter of StreamLoad. For details, refer to the StreamLoad introduction page. [Stream load - Apache Doris](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual)
 
     This includes the imported data format: format, etc. The imported data format defaults to csv, which supports JSON. For details, please refer to the type conversion section below, or refer to the official information of Stream load above.
 
diff --git a/versioned_docs/version-4.x/ecosystem/dbt-doris-adapter.md b/versioned_docs/version-4.x/ecosystem/dbt-doris-adapter.md
index c3049a2b1e5..f544f68a9ef 100644
--- a/versioned_docs/version-4.x/ecosystem/dbt-doris-adapter.md
+++ b/versioned_docs/version-4.x/ecosystem/dbt-doris-adapter.md
@@ -217,7 +217,7 @@ The details of the above configuration items are as follows:
 
 ### dbt-doris adapter seed
 
-[`seed`](https://docs.getdbt.com/faqs/seeds/build-one-seed) is a functional module used to load data files such as csv. It is a way to load files into the library and participate in model building, but there are the following precautions:
+[`seed`](https://docs.getdbt.com/docs/build/seeds) is a functional module used to load data files such as csv. It is a way to load files into the library and participate in model building, but there are the following precautions:
 1. Seeds should not be used to load raw data (for example, large CSV exports from a production database).
 2. Since seeds are version controlled, they are best suited to files that contain business-specific logic, for example a list of country codes or user IDs of employees.
 3. Loading CSVs using dbt's seed functionality is not performant for large files. Consider using `streamload` to load these CSVs into doris.
diff --git a/versioned_docs/version-4.x/ecosystem/doris-kafka-connector.md b/versioned_docs/version-4.x/ecosystem/doris-kafka-connector.md
index 5182cde3495..2371d999cc8 100644
--- a/versioned_docs/version-4.x/ecosystem/doris-kafka-connector.md
+++ b/versioned_docs/version-4.x/ecosystem/doris-kafka-connector.md
@@ -209,7 +209,7 @@ errors.deadletterqueue.topic.replication.factor=1
 | jmx                         | -                                    | true          | N            | To obtain connector internal monitoring indicators through JMX, please refer to: [Doris-Connector-JMX](https://github.com/apache/doris-kafka-connector/blob/master/docs/en/Doris-Connector-JMX.md) [...]
 | label.prefix                | -                                    | ${name}       | N            | Stream load label prefix when importing data. Defaults to the Connector application name. [...]
 | auto.redirect               | -                                    | true          | N            | Whether to redirect StreamLoad requests. After being turned on, StreamLoad will redirect to the BE where data needs to be written through FE, and the BE information will no longer be displayed. |
-| sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N            | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync_mode mode: [...]
+| sink.properties.*           | -                                    | `'sink.properties.format':'json'`, <br/>`'sink.properties.read_json_by_line':'true'` | N            | Import parameters for Stream Load. <br />For example: define column separator `'sink.properties.column_separator':','` <br />Detailed parameter reference [here](https://doris.apache.org/docs/data-operate/import/import-way/stream-load-manual) <br/><br/> **Enable Group Commit**, for example, enable group commit in sync [...]
 | delivery.guarantee          | `at_least_once`,<br/> `exactly_once` | at_least_once | N            | How to ensure data consistency when consuming Kafka data is imported into Doris. Supports `at_least_once` `exactly_once`, default is `at_least_once`. Doris needs to be upgraded to 2.1.0 or above to ensure data `exactly_once` |
 | converter.mode              | `normal`,<br/> `debezium_ingestion`  | normal        | N            | Type conversion mode of upstream data when using Connector to consume Kafka data. <br/> ```normal``` means consuming data in Kafka normally without any type conversion. <br/> ```debezium_ingestion``` means that when Kafka upstream data is collected through CDC (Changelog Data Capture) tools such as Debezium, the upstr [...]
 | debezium.schema.evolution   | `none`,<br/> `basic`                 | none          | N            | Use Debezium to collect upstream database systems (such as MySQL), and when structural changes occur, the added fields can be synchronized to Doris. <br/>`none` means that when the structure of the upstream database system changes, the changed structure will not be synchronized to Doris. <br/> `basic` means synchroniz [...]

