zhilinli123 commented on code in PR #5335:
URL: https://github.com/apache/seatunnel/pull/5335#discussion_r1299150549
##########
docs/en/connector-v2/source/Iceberg.md:
##########
@@ -22,126 +28,112 @@ Source connector for Apache Iceberg. It can support batch and stream mode.
- [x] hadoop(2.7.1 , 2.7.5 , 3.1.3)
- [x] hive(2.3.9 , 3.1.2)
-## Options
-
-| name | type | required | default value |
-|--------------------------|---------|----------|----------------------|
-| catalog_name | string | yes | - |
-| catalog_type | string | yes | - |
-| uri | string | no | - |
-| warehouse | string | yes | - |
-| namespace | string | yes | - |
-| table | string | yes | - |
-| schema | config | no | - |
-| case_sensitive | boolean | no | false |
-| start_snapshot_timestamp | long | no | - |
-| start_snapshot_id | long | no | - |
-| end_snapshot_id | long | no | - |
-| use_snapshot_id | long | no | - |
-| use_snapshot_timestamp | long | no | - |
-| stream_scan_strategy | enum | no | FROM_LATEST_SNAPSHOT |
-| common-options | | no | - |
-
-### catalog_name [string]
-
-User-specified catalog name.
-
-### catalog_type [string]
-
-The optional values are:
-- hive: The hive metastore catalog.
-- hadoop: The hadoop catalog.
-
-### uri [string]
-
-The Hive metastore’s thrift URI.
-
-### warehouse [string]
-
-The location to store metadata files and data files.
-
-### namespace [string]
-
-The iceberg database name in the backend catalog.
-
-### table [string]
-
-The iceberg table name in the backend catalog.
-
-### case_sensitive [boolean]
-
-If data columns where selected via schema [config], controls whether the match to the schema will be done with case sensitivity.
-
-### schema [config]
+## Description
-#### fields [Config]
+Source connector for Apache Iceberg. It can support batch and stream mode.
-Use projection to select data columns and columns order.
+## Database Dependency
-e.g.
+> In order to be compatible with different versions of Hadoop and Hive, the scope of hive-exec and flink-shaded-hadoop-2 in the project pom file is set to provided. So if you use the Flink engine, you may first need to add the following Jar packages to the <FLINK_HOME>/lib directory; if you are using the Spark engine integrated with Hadoop, you do not need to add them.
```
-schema {
- fields {
- f2 = "boolean"
- f1 = "bigint"
- f3 = "int"
- f4 = "bigint"
- }
-}
+flink-shaded-hadoop-x-xxx.jar
Review Comment:
https://github.com/apache/seatunnel/blob/dev/docs/en/connector-v2/source/Mysql.md

Please refer to the current Mysql connector documentation and add a jar dependency download address here to guide the user.
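
For example, the dependency note could follow the Mysql.md pattern along these lines (a sketch only; the exact artifact names and download link are placeholders for the author to fill in):

```markdown
## Database Dependency

> If you use the Flink engine, download the corresponding `flink-shaded-hadoop-2`
> Jar package from the Maven central repository and copy it into the
> `<FLINK_HOME>/lib` directory. If you are using the Spark engine integrated
> with Hadoop, you can skip this step.
```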