This is an automated email from the ASF dual-hosted git repository.

jinsongzhou pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/amoro.git


The following commit(s) were added to refs/heads/master by this push:
     new c0c11fec0 fix some doc error (#3551)
c0c11fec0 is described below

commit c0c11fec0abf2eedf325f9a369210f0c845fef3e
Author: Wang Tao <[email protected]>
AuthorDate: Wed May 14 10:59:05 2025 +0800

    fix some doc error (#3551)
    
    fix#keep same with amoro-site repo
---
 docs/_index.md                                |  2 +-
 docs/admin-guides/deployment-on-kubernetes.md | 15 +--------------
 docs/admin-guides/deployment.md               |  3 +--
 docs/admin-guides/managing-optimizers.md      | 10 ++--------
 docs/engines/flink/flink-cdc-ingestion.md     |  4 ++--
 docs/engines/flink/flink-dml.md               | 14 +++++++-------
 docs/engines/flink/flink-get-started.md       |  2 +-
 docs/user-guides/cdc-ingestion.md             |  4 ++--
 8 files changed, 17 insertions(+), 37 deletions(-)

diff --git a/docs/_index.md b/docs/_index.md
index 885822689..b219e33d1 100644
--- a/docs/_index.md
+++ b/docs/_index.md
@@ -70,7 +70,7 @@ Amoro support multiple processing engines for Mixed format as below:
 | Processing Engine | Version                   | Batch Read | Batch Write | Batch Overwrite | Streaming Read | Streaming Write | Create Table | Alter Table |
 |-------------------|---------------------------|------------|-------------|-----------------|----------------|-----------------|--------------|-------------|
 | Flink             | 1.15.x, 1.16.x and 1.17.x | &#x2714;   | &#x2714;    | &#x2716;        | &#x2714;       | &#x2714;        | &#x2714;     | &#x2716;    |
-| Spark             | 3.1, 3.2, 3.3             | &#x2714;   | &#x2714;    | &#x2714;        | &#x2716;       | &#x2716;        | &#x2714;     | &#x2714;    |
+| Spark             | 3.2, 3.3, 3.5             | &#x2714;   | &#x2714;    | &#x2714;        | &#x2716;       | &#x2716;        | &#x2714;     | &#x2714;    |
 | Hive              | 2.x, 3.x                  | &#x2714;   | &#x2716;    | &#x2714;        | &#x2716;       | &#x2716;        | &#x2716;     | &#x2714;    |
 | Trino             | 406                       | &#x2714;   | &#x2716;    | &#x2714;        | &#x2716;       | &#x2716;        | &#x2716;     | &#x2714;    |
 
diff --git a/docs/admin-guides/deployment-on-kubernetes.md b/docs/admin-guides/deployment-on-kubernetes.md
index 1211bdeb2..97f7951f2 100644
--- a/docs/admin-guides/deployment-on-kubernetes.md
+++ b/docs/admin-guides/deployment-on-kubernetes.md
@@ -74,20 +74,7 @@ or build the `amoro-spark-optimizer` image by:
 ```
 
 ## Get Helm Charts
-
-You can obtain the latest official release chart by adding the official Helm repository.
-
-```shell
-$ helm repo add amoro https://netease.github.io/amoro/charts
-$ helm search repo amoro 
-NAME           CHART VERSION    APP VERSION        DESCRIPTION           
-amoro/amoro    0.1.0            0.7.0              A Helm chart for Amoro 
-
-$ helm pull amoro/amoro 
-$ tar zxvf amoro-*.tgz
-```
-
-Alternatively, you can find the latest charts directly from the Github source code.
+You can find the latest charts directly from the Github source code.
 
 ```shell
 $ git clone https://github.com/apache/amoro.git
diff --git a/docs/admin-guides/deployment.md b/docs/admin-guides/deployment.md
index 2a7a2e1ae..779e5d256 100644
--- a/docs/admin-guides/deployment.md
+++ b/docs/admin-guides/deployment.md
@@ -31,8 +31,7 @@ You can choose to download the stable release package from [download page](../..
 ## System requirements
 
 - Java 8 is required.
-- Optional: MySQL 5.5 or higher
-- Optional: PostgreSQL 14.x or higher
+- Optional: An RDBMS (PostgreSQL 14.x or higher, MySQL 5.5 or higher)
 - Optional: ZooKeeper 3.4.x or higher
 
 ## Download the distribution
diff --git a/docs/admin-guides/managing-optimizers.md b/docs/admin-guides/managing-optimizers.md
index f168579c6..c289bfa13 100644
--- a/docs/admin-guides/managing-optimizers.md
+++ b/docs/admin-guides/managing-optimizers.md
@@ -338,10 +338,7 @@ You can submit optimizer in your own Flink task development platform or local Fl
  ${AMORO_HOME}/plugin/optimizer/flink/optimizer-job.jar \
  -a thrift://127.0.0.1:1261 \
  -g flinkGroup \
- -p 1 \
- -eds \
- -dsp /tmp \
- -msz 512
+ -p 1
 ```
 The description of the relevant parameters is shown in the following table:
 
@@ -368,10 +365,7 @@ Or you can submit optimizer in your own Spark task development platform or local
  ${AMORO_HOME}/plugin/optimizer/spark/optimizer-job.jar \
  -a thrift://127.0.0.1:1261 \
  -g sparkGroup \
- -p 1 \
- -eds \
- -dsp /tmp \
- -msz 512
+ -p 1
 ```
 The description of the relevant parameters is shown in the following table:
 
diff --git a/docs/engines/flink/flink-cdc-ingestion.md b/docs/engines/flink/flink-cdc-ingestion.md
index f1763169c..c79e311c8 100644
--- a/docs/engines/flink/flink-cdc-ingestion.md
+++ b/docs/engines/flink/flink-cdc-ingestion.md
@@ -2,10 +2,10 @@
 title: "Flink CDC Ingestion"
 url: flink-cdc-ingestion
 aliases:
-  - "flink/cdc"
+  - "flink/cdc-ingestion"
 menu:
     main:
-        parent: User Guides
+        parent: Flink
         weight: 400
 ---
 <!--
diff --git a/docs/engines/flink/flink-dml.md b/docs/engines/flink/flink-dml.md
index dc15b672d..bdc9f7f21 100644
--- a/docs/engines/flink/flink-dml.md
+++ b/docs/engines/flink/flink-dml.md
@@ -131,13 +131,13 @@ SELECT * FROM unkeyed /*+ OPTIONS('monitor-interval'='1s')*/ ;
 ```
 Hint Options
 
-| Key                              | Default Value | Type     | Required | Description |
-|----------------------------------|---------------|----------|----------|-------------|
-| streaming                        | true          | Boolean  | No       | Reads bounded data or unbounded data in a streaming mode, false: reads bounded data, true: reads unbounded data |
-| mixed-format.read.mode           | file          | String   | No       | To specify the type of data to read from an Amoro table, either File or Log, use the mixed-format.read.mode parameter. If the value is set to log, the Log configuration must be enabled. |
-| monitor-interval<img width=120/> | 10s           | Duration | No       | The mixed-format.read.mode = file parameter needs to be set for this to take effect. The time interval for monitoring newly added data files |
-| start-snapshot-id                | (none)        | Long     | No       | To read incremental data starting from a specified snapshot (excluding the data in the start-snapshot-id snapshot), specify the snapshot ID using the start-snapshot-id parameter. If not specified, the reader will start reading from the snapshot after the current one (excluding the data in the current snapshot). |
-| other table parameters           | (none)        | String   | No       | All parameters of an Amoro table can be dynamically modified through SQL Hints, but they only take effect for this specific task. For the specific parameter list, please refer to the [Table Configuration](../configurations/). For permissions-related configurations on the catalog, they can also be configured in Hint using parameters such as [properties.auth.XXX in catalog DDL](./flink-ddl.md#Flink SQL) |
+| Key                              | Default Value | Type     | Required | Description |
+|----------------------------------|---------------|----------|----------|-------------|
+| streaming                        | true          | Boolean  | No       | Reads bounded data or unbounded data in a streaming mode, false: reads bounded data, true: reads unbounded data |
+| mixed-format.read.mode           | file          | String   | No       | To specify the type of data to read from an Amoro table, either File or Log, use the mixed-format.read.mode parameter. If the value is set to log, the Log configuration must be enabled. |
+| monitor-interval<img width=120/> | 10s           | Duration | No       | The mixed-format.read.mode = file parameter needs to be set for this to take effect. The time interval for monitoring newly added data files |
+| start-snapshot-id                | (none)        | Long     | No       | To read incremental data starting from a specified snapshot (excluding the data in the start-snapshot-id snapshot), specify the snapshot ID using the start-snapshot-id parameter. If not specified, the reader will start reading from the snapshot after the current one (excluding the data in the current snapshot). |
+| other table parameters           | (none)        | String   | No       | All parameters of an Amoro table can be dynamically modified through SQL Hints, but they only take effect for this specific task. For the specific parameter list, please refer to the [Table Configuration](../configurations/). For permissions-related configurations on the catalog, they can also be configured in Hint using parameters such as [properties.auth.XXX in catalog DDL](../flink-ddl/#flink-sql) |
 
 ### Streaming Mode (FileStore primary key table)
 
diff --git a/docs/engines/flink/flink-get-started.md b/docs/engines/flink/flink-get-started.md
index e47b51e11..0a28c402f 100644
--- a/docs/engines/flink/flink-get-started.md
+++ b/docs/engines/flink/flink-get-started.md
@@ -62,7 +62,7 @@ The Flink Runtime Jar is located in the `amoro-format-mixed/amoro-format-mixed-f
 Download Flink and related dependencies, and download Flink 1.15/1.16/1.17 as needed. Taking Flink 1.15 as an example:
 ```shell
 # Replace version value with the latest Amoro version if needed
-AMORO_VERSION=0.7.0-incubating
+AMORO_VERSION=0.8.0-incubating
 FLINK_VERSION=1.15.3
 FLINK_MAJOR_VERSION=1.15
 FLINK_HADOOP_SHADE_VERSION=2.7.5
diff --git a/docs/user-guides/cdc-ingestion.md b/docs/user-guides/cdc-ingestion.md
index 23f00a4d0..280af8353 100644
--- a/docs/user-guides/cdc-ingestion.md
+++ b/docs/user-guides/cdc-ingestion.md
@@ -39,9 +39,9 @@ tool for real time data and batch data. Flink CDC brings the
 simplicity and elegance of data integration via YAML to describe the data movement and transformation.
 
 Amoro provides the relevant code case reference how to complete cdc data to different lakehouse table format, see 
-[**flink-cdc-ingestion**](../engines/flink/flink-cdc-ingestion.md) doc
+[**flink-cdc-ingestion**](../flink-cdc-ingestion) doc
 
-At the same time, we provide [**Mixed-Iceberg**](../formats/mixed-iceberg.md)  format, which you can understand as 
+At the same time, we provide [**Mixed-Iceberg**](../iceberg-format)  format, which you can understand as 
 **STREAMING** For iceberg, which will enhance your real-time processing scene for you
 
 ## Debezium

Reply via email to