This is an automated email from the ASF dual-hosted git repository.
morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new 64da1ae67b9 [doc](fix)map "Row" of Paimon to "Struct" of Doris (#577)
64da1ae67b9 is described below
commit 64da1ae67b976612455b3219d5a85c5d5de8776e
Author: 苏小刚 <[email protected]>
AuthorDate: Sun May 5 00:42:06 2024 +0800
[doc](fix)map "Row" of Paimon to "Struct" of Doris (#577)
---
community/developer-guide/pipeline-tracing.md | 4 +--
docs/lakehouse/datalake-analytics/iceberg.md | 17 ++++++++++++
docs/lakehouse/datalake-analytics/paimon.md | 16 +++++------
.../current/developer-guide/pipeline-tracing.md | 4 +--
.../lakehouse/datalake-analytics/iceberg.md | 17 ++++++++++++
.../current/lakehouse/datalake-analytics/paimon.md | 15 +++++-----
.../version-2.0/lakehouse/datalake/paimon.md | 32 +++++++++++++---------
.../lakehouse/datalake-analytics/iceberg.md | 22 +++++++++++++--
.../lakehouse/datalake-analytics/paimon.md | 16 +++++------
.../version-2.0/lakehouse/datalake/paimon.md | 31 +++++++++++++++------
.../lakehouse/datalake-analytics/iceberg.md | 21 ++++++++++++--
.../lakehouse/datalake-analytics/paimon.md | 16 +++++------
12 files changed, 148 insertions(+), 63 deletions(-)
diff --git a/community/developer-guide/pipeline-tracing.md b/community/developer-guide/pipeline-tracing.md
index a9c62c31315..d291d702c98 100644
--- a/community/developer-guide/pipeline-tracing.md
+++ b/community/developer-guide/pipeline-tracing.md
@@ -72,8 +72,8 @@ to generate a json file that can be displayed. For more detailed instructions, s
Pipeline Tracing is visualised using [Perfetto](https://ui.perfetto.dev/). After generating a file in the valid format, select "Open trace file" on its page to open the file and view the results:
-
+
The tool is very powerful. For example, it is easy to see how the same Task is scheduled across cores.
-
+
diff --git a/docs/lakehouse/datalake-analytics/iceberg.md b/docs/lakehouse/datalake-analytics/iceberg.md
index 3c6ea5ce8d4..48d3da31d7c 100644
--- a/docs/lakehouse/datalake-analytics/iceberg.md
+++ b/docs/lakehouse/datalake-analytics/iceberg.md
@@ -216,6 +216,23 @@ The data is stored on Huawei Cloud OBS:
"obs.region" = "cn-north-4"
```
+## Example
+
+```
+-- MinIO & Rest Catalog
+CREATE CATALOG `iceberg` PROPERTIES (
+ "type" = "iceberg",
+ "iceberg.catalog.type" = "rest",
+ "uri" = "http://10.0.0.1:8181",
+ "warehouse" = "s3://bucket",
+ "token" = "token123456",
+ "s3.access_key" = "ak",
+ "s3.secret_key" = "sk",
+ "s3.endpoint" = "http://10.0.0.1:9000",
+ "s3.region" = "us-east-1"
+);
+```
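As a quick sanity check after creating a catalog like the one above, you can switch into it and browse it. This is a hypothetical session: `SWITCH` and `SHOW DATABASES` are standard Doris statements, while `db` and `tbl` are placeholder names, not part of the example above:

```sql
-- Hypothetical follow-up to the REST-catalog example above.
SWITCH iceberg;      -- enter the catalog created above
SHOW DATABASES;      -- list the Iceberg namespaces Doris can see
-- SELECT * FROM db.tbl LIMIT 10;   -- `db` and `tbl` are placeholders
```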
+
## Column type mapping
| Iceberg Type | Doris Type |
diff --git a/docs/lakehouse/datalake-analytics/paimon.md b/docs/lakehouse/datalake-analytics/paimon.md
index 63ec1f0a3f6..dab57c66221 100644
--- a/docs/lakehouse/datalake-analytics/paimon.md
+++ b/docs/lakehouse/datalake-analytics/paimon.md
@@ -27,24 +27,20 @@ under the License.
# Paimon
-<version since="dev">
-</version>
-
## Instructions for use
1. When the data is in HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml in the conf directory of both FE and BE. Doris reads the Hadoop configuration files in the conf directory first, and then the configuration files referenced by the environment variable `HADOOP_CONF_DIR`.
-2. The currently adapted version of the payment is 0.6.0
+2. The currently adapted version of Paimon is 0.7.
## Create Catalog
Paimon Catalog currently supports two types of Metastore for creating catalogs:
+
* filesystem (default): stores both metadata and data in the file system.
* hive metastore: also stores metadata in Hive Metastore, so users can access these tables directly from Hive.
### Creating a Catalog Based on FileSystem
-> For versions 2.0.1 and earlier, please use the following `Create Catalog based on Hive Metastore`.
-
#### HDFS
```sql
@@ -168,12 +164,13 @@ CREATE CATALOG `paimon_kerberos` PROPERTIES (
| DoubleType                            | Double                    |                                       |
| VarCharType                           | VarChar                   |                                       |
| CharType                              | Char                      |                                       |
+| VarBinaryType, BinaryType             | Binary                    |                                       |
| DecimalType(precision, scale)         | Decimal(precision, scale) |                                       |
| TimestampType,LocalZonedTimestampType | DateTime                  |                                       |
| DateType                              | Date                      |                                       |
-| MapType                               | Map                       | Support Map nesting                   |
| ArrayType                             | Array                     | Support Array nesting                 |
-| VarBinaryType, BinaryType             | Binary                    |                                       |
+| MapType                               | Map                       | Support Map nesting                   |
+| RowType                               | Struct                    | Support Struct nesting (since 2.0.10 & 2.1.3) |
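The newly added RowType row means a Paimon nested row is now readable as a Doris Struct. A minimal sketch, assuming a catalog `paimon_hdfs` with a database `db` and a table `tbl` holding a struct column `s` that has a field `f1` (all names hypothetical; `struct_element` is the Doris function for extracting one struct field):

```sql
SWITCH paimon_hdfs;      -- placeholder catalog name
USE db;                  -- placeholder database
-- Read one field of a Paimon RowType column mapped to a Doris Struct:
SELECT struct_element(s, 'f1') FROM tbl LIMIT 10;
```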
## FAQ
@@ -188,8 +185,9 @@ CREATE CATALOG `paimon_kerberos` PROPERTIES (
3. When accessing object storage (OSS, S3, etc.), encounter "file system does not support".
- In versions before 2.0.5 (inclusive), users need to manually download the following jar package and place it in the `${DORIS_HOME}/be/lib/java_extensions/preload-extensions` directory, and restart BE.
+ In versions before 2.0.5 (inclusive), users need to manually download the following jar package and place it in the `${DORIS_HOME}/be/lib/java_extensions/preload-extensions` directory, and restart BE.
- OSS: [paimon-oss-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-oss/0.6.0-incubating/paimon-oss-0.6.0-incubating.jar)
- Other Object Storage: [paimon-s3-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-s3/0.6.0-incubating/paimon-s3-0.6.0-incubating.jar)
+ No need to download these jars since 2.0.6.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/developer-guide/pipeline-tracing.md b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/developer-guide/pipeline-tracing.md
index 93393a970e5..9419e33684d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/developer-guide/pipeline-tracing.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs-community/current/developer-guide/pipeline-tracing.md
@@ -72,8 +72,8 @@ python3 origin-to-show.py -s <SOURCE_FILE> -d <DEST>.json
The visualisation of Pipeline Tracing uses [Perfetto](https://ui.perfetto.dev/). After generating a file in the corresponding format, select "Open trace file" on its page to open the file and view the results:
-
+
The tool is very powerful. For example, it is easy to see how the same Task is scheduled across cores.
-
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/datalake-analytics/iceberg.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/datalake-analytics/iceberg.md
index 2aed5c8cd93..e2842e60fed 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/datalake-analytics/iceberg.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/datalake-analytics/iceberg.md
@@ -217,6 +217,23 @@ CREATE CATALOG iceberg PROPERTIES (
"obs.region" = "cn-north-4"
```
+## Example
+
+```
+-- MinIO & Rest Catalog
+CREATE CATALOG `iceberg` PROPERTIES (
+ "type" = "iceberg",
+ "iceberg.catalog.type" = "rest",
+ "uri" = "http://10.0.0.1:8181",
+ "warehouse" = "s3://bucket",
+ "token" = "token123456",
+ "s3.access_key" = "ak",
+ "s3.secret_key" = "sk",
+ "s3.endpoint" = "http://10.0.0.1:9000",
+ "s3.region" = "us-east-1"
+);
+```
+
## Column type mapping
| Iceberg Type | Doris Type |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/datalake-analytics/paimon.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/datalake-analytics/paimon.md
index 133b85e46c1..7108e6a5bcb 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/datalake-analytics/paimon.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/datalake-analytics/paimon.md
@@ -29,7 +29,7 @@ under the License.
## Instructions for use
1. When the data is in HDFS, put core-site.xml, hdfs-site.xml and hive-site.xml in the conf directory of both FE and BE. The Hadoop configuration files in the conf directory are read first, then the configuration files referenced by the environment variable `HADOOP_CONF_DIR`.
-2. The currently adapted Paimon version is 0.6.0
+2. The currently adapted Paimon version is 0.7.
## Create Catalog
@@ -39,10 +39,6 @@ Paimon Catalog currently supports two types of Metastore for creating catalogs:
### Creating a Catalog based on FileSystem
-:::tips Tip
-For versions 2.0.1 and earlier, please use the later section `Creating a Catalog based on Hive Metastore`.
-:::
-
#### HDFS
```sql
@@ -167,12 +163,13 @@ CREATE CATALOG `paimon_kerberos` PROPERTIES (
| DoubleType                            | Double                    |                                       |
| VarCharType                           | VarChar                   |                                       |
| CharType                              | Char                      |                                       |
+| VarBinaryType, BinaryType             | String                    |                                       |
| DecimalType(precision, scale)         | Decimal(precision, scale) |                                       |
| TimestampType,LocalZonedTimestampType | DateTime                  |                                       |
| DateType                              | Date                      |                                       |
-| MapType                               | Map                       | Support Map nesting                   |
-| ArrayType                             | Array                     | Support Array nesting                 |
-| VarBinaryType, BinaryType             | Binary                    |                                       |
+| ArrayType                             | Array                     | Support Array nesting                 |
+| MapType                               | Map                       | Support Map nesting                   |
+| RowType                               | Struct                    | Support Struct nesting (since 2.0.10 and 2.1.3) |
## FAQ
@@ -192,4 +189,6 @@ CREATE CATALOG `paimon_kerberos` PROPERTIES (
- For OSS: [paimon-oss-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-oss/0.6.0-incubating/paimon-oss-0.6.0-incubating.jar)
- For other object storage: [paimon-s3-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-s3/0.6.0-incubating/paimon-s3-0.6.0-incubating.jar)
+   Manual placement is no longer needed in versions after 2.0.6.
+
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/lakehouse/datalake/paimon.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/lakehouse/datalake/paimon.md
index 56d63310669..44aeb435143 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/lakehouse/datalake/paimon.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/lakehouse/datalake/paimon.md
@@ -30,7 +30,7 @@ under the License.
1. When the data is in HDFS, put core-site.xml, hdfs-site.xml and hive-site.xml in the conf directory of both FE and BE. The Hadoop configuration files in the conf directory are read first, then the configuration files referenced by the environment variable `HADOOP_CONF_DIR`.
-2. The currently adapted Paimon version is 0.6.0
+2. The latest adapted Paimon version is 0.7.
## Create Catalog
@@ -42,10 +42,6 @@ Paimon Catalog currently supports two types of Metastore for creating catalogs:
### Creating a Catalog based on FileSystem
-:::tip
-For versions 2.0.1 and earlier, please use the later section `Creating a Catalog based on Hive Metastore`.
-:::
-
**HDFS**
```sql
@@ -162,23 +158,33 @@ CREATE CATALOG `paimon_hms` PROPERTIES (
| DoubleType                            | Double                    |                                       |
| VarCharType                           | VarChar                   |                                       |
| CharType                              | Char                      |                                       |
+| VarBinaryType, BinaryType             | Binary                    |                                       |
| DecimalType(precision, scale)         | Decimal(precision, scale) |                                       |
| TimestampType,LocalZonedTimestampType | DateTime                  |                                       |
| DateType                              | Date                      |                                       |
-| MapType                               | Map                       | Support Map nesting                   |
| ArrayType                             | Array                     | Support Array nesting                 |
-| VarBinaryType, BinaryType             | Binary                    |                                       |
+| MapType                               | Map                       | Support Map nesting                   |
+| RowType                               | Struct                    | Support Struct nesting (since 2.0.10) |
-:::caution
-When accessing object storage (OSS, S3, etc.), the error "file system not supported" is reported
+## FAQ
-In versions before 2.0.5 (inclusive), users need to manually download the following jar packages, place them in the `${DORIS_HOME}/be/lib/java_extensions/preload-extensions` directory, and restart BE.
+1. Kerberos issues
-- For OSS: [paimon-oss-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-oss/0.6.0-incubating/paimon-oss-0.6.0-incubating.jar)
+   - Make sure the principal and keytab are configured correctly.
+   - Start a scheduled task (such as crontab) on the BE nodes to run the `kinit -kt your_principal your_keytab` command at a fixed interval (such as every 12 hours).
-- For other object storage: [paimon-s3-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-s3/0.6.0-incubating/paimon-s3-0.6.0-incubating.jar)
+2. Unknown type value: UNSUPPORTED
-:::
+   This is a compatibility issue between Doris 2.0.2 and Paimon 0.5; upgrade to 2.0.3 or higher to solve it, or apply the [patch](https://github.com/apache/doris/pull/24985) yourself.
+
+3. When accessing object storage (OSS, S3, etc.), the error "file system not supported" is reported
+
+   In versions before 2.0.5 (inclusive), users need to manually download the following jar packages, place them in the `${DORIS_HOME}/be/lib/java_extensions/preload-extensions` directory, and restart BE.
+
+   - For OSS: [paimon-oss-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-oss/0.6.0-incubating/paimon-oss-0.6.0-incubating.jar)
+   - For other object storage: [paimon-s3-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-s3/0.6.0-incubating/paimon-s3-0.6.0-incubating.jar)
+
+   Versions after 2.0.6 no longer require users to place these manually.
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/datalake-analytics/iceberg.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/datalake-analytics/iceberg.md
index ee5fd5c2aac..e2842e60fed 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/datalake-analytics/iceberg.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/datalake-analytics/iceberg.md
@@ -35,6 +35,7 @@ under the License.
4. Supports the Parquet file format
5. The ORC file format is supported since version 2.1.3.
+
## Create Catalog
### Creating a Catalog based on Hive Metastore
@@ -216,6 +217,23 @@ CREATE CATALOG iceberg PROPERTIES (
"obs.region" = "cn-north-4"
```
+## Example
+
+```
+-- MinIO & Rest Catalog
+CREATE CATALOG `iceberg` PROPERTIES (
+ "type" = "iceberg",
+ "iceberg.catalog.type" = "rest",
+ "uri" = "http://10.0.0.1:8181",
+ "warehouse" = "s3://bucket",
+ "token" = "token123456",
+ "s3.access_key" = "ak",
+ "s3.secret_key" = "sk",
+ "s3.endpoint" = "http://10.0.0.1:9000",
+ "s3.region" = "us-east-1"
+);
+```
+
## Column type mapping
| Iceberg Type | Doris Type |
@@ -233,8 +251,8 @@ CREATE CATALOG iceberg PROPERTIES (
| string | string |
| fixed(L) | char(L) |
| binary | string |
-| struct | struct (supported since 2.1.3) |
-| map | map (supported since 2.1.3) |
+| struct | struct (supported since 2.1.3) |
+| map | map (supported since 2.1.3) |
| list | array |
| time | unsupported |
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/datalake-analytics/paimon.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/datalake-analytics/paimon.md
index 58615fd6926..7108e6a5bcb 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/datalake-analytics/paimon.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/lakehouse/datalake-analytics/paimon.md
@@ -25,10 +25,11 @@ under the License.
-->
+
## Instructions for use
1. When the data is in HDFS, put core-site.xml, hdfs-site.xml and hive-site.xml in the conf directory of both FE and BE. The Hadoop configuration files in the conf directory are read first, then the configuration files referenced by the environment variable `HADOOP_CONF_DIR`.
-2. The currently adapted Paimon version is 0.6.0
+2. The currently adapted Paimon version is 0.7.
## Create Catalog
@@ -38,10 +39,6 @@ Paimon Catalog currently supports two types of Metastore for creating catalogs:
### Creating a Catalog based on FileSystem
-:::tips Tip
-For versions 2.0.1 and earlier, please use the later section `Creating a Catalog based on Hive Metastore`.
-:::
-
#### HDFS
```sql
@@ -166,12 +163,13 @@ CREATE CATALOG `paimon_kerberos` PROPERTIES (
| DoubleType                            | Double                    |                                       |
| VarCharType                           | VarChar                   |                                       |
| CharType                              | Char                      |                                       |
+| VarBinaryType, BinaryType             | String                    |                                       |
| DecimalType(precision, scale)         | Decimal(precision, scale) |                                       |
| TimestampType,LocalZonedTimestampType | DateTime                  |                                       |
| DateType                              | Date                      |                                       |
-| MapType                               | Map                       | Support Map nesting                   |
-| ArrayType                             | Array                     | Support Array nesting                 |
-| VarBinaryType, BinaryType             | Binary                    |                                       |
+| ArrayType                             | Array                     | Support Array nesting                 |
+| MapType                               | Map                       | Support Map nesting                   |
+| RowType                               | Struct                    | Support Struct nesting (since 2.0.10 and 2.1.3) |
## FAQ
@@ -191,4 +189,6 @@ CREATE CATALOG `paimon_kerberos` PROPERTIES (
- For OSS: [paimon-oss-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-oss/0.6.0-incubating/paimon-oss-0.6.0-incubating.jar)
- For other object storage: [paimon-s3-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-s3/0.6.0-incubating/paimon-s3-0.6.0-incubating.jar)
+   Manual placement is no longer needed in versions after 2.0.6.
+
diff --git a/versioned_docs/version-2.0/lakehouse/datalake/paimon.md b/versioned_docs/version-2.0/lakehouse/datalake/paimon.md
index 38d2c2a569e..9c76aaccdfa 100644
--- a/versioned_docs/version-2.0/lakehouse/datalake/paimon.md
+++ b/versioned_docs/version-2.0/lakehouse/datalake/paimon.md
@@ -27,13 +27,10 @@ under the License.
# Paimon
-<version since="dev">
-</version>
-
## Instructions for use
1. When the data is in HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml in the conf directory of both FE and BE. Doris reads the Hadoop configuration files in the conf directory first, and then the configuration files referenced by the environment variable `HADOOP_CONF_DIR`.
-2. The currently adapted version of the payment is 0.6.0
+2. The currently adapted version of Paimon is 0.7.
## Create Catalog
@@ -43,8 +40,6 @@ Paimon Catalog currently supports two types of Metastore for creating catalogs:
### Creating a Catalog Based on FileSystem
-> For versions 2.0.1 and earlier, please use the following `Create Catalog based on Hive Metastore`.
-
#### HDFS
```sql
CREATE CATALOG `paimon_hdfs` PROPERTIES (
@@ -155,10 +150,30 @@ CREATE CATALOG `paimon_hms` PROPERTIES (
| DoubleType                            | Double                    |                                       |
| VarCharType                           | VarChar                   |                                       |
| CharType                              | Char                      |                                       |
+| VarBinaryType, BinaryType             | Binary                    |                                       |
| DecimalType(precision, scale)         | Decimal(precision, scale) |                                       |
| TimestampType,LocalZonedTimestampType | DateTime                  |                                       |
| DateType                              | Date                      |                                       |
-| MapType                               | Map                       | Support Map nesting                   |
| ArrayType                             | Array                     | Support Array nesting                 |
-| VarBinaryType, BinaryType             | Binary                    |                                       |
+| MapType                               | Map                       | Support Map nesting                   |
+| RowType                               | Struct                    | Support Struct nesting (since 2.0.10 & 2.1.3) |
+
+## FAQ
+
+1. Kerberos
+
+ - Make sure the principal and keytab are correct.
+ - You need to start a scheduled task (such as crontab) on the BE nodes to execute the `kinit -kt your_principal your_keytab` command at a fixed interval (such as every 12 hours).
+
+2. Unknown type value: UNSUPPORTED
+
+ This is a compatibility issue between Doris 2.0.2 and Paimon 0.5; you need to upgrade to 2.0.3 or higher to solve this problem, or apply the [patch](https://github.com/apache/doris/pull/24985) yourself.
+
+3. When accessing object storage (OSS, S3, etc.), encounter "file system does not support".
+
+ In versions before 2.0.5 (inclusive), users need to manually download the following jar packages, place them in the `${DORIS_HOME}/be/lib/java_extensions/preload-extensions` directory, and restart BE.
+
+ - OSS: [paimon-oss-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-oss/0.6.0-incubating/paimon-oss-0.6.0-incubating.jar)
+ - Other object storage: [paimon-s3-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-s3/0.6.0-incubating/paimon-s3-0.6.0-incubating.jar)
+ No need to download these jars since 2.0.6.
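The crontab approach mentioned in the Kerberos FAQ item above can be sketched as follows. The keytab path (`/etc/doris/doris.keytab`) and the principal are placeholders, not values from this patch; adapt them to your deployment:

```shell
# Hypothetical sketch: a cron entry for a BE node that renews the Kerberos
# ticket cache every 12 hours. Keytab path and principal are placeholders.
CRON_ENTRY='0 */12 * * * kinit -kt /etc/doris/doris.keytab your_principal'

# To install it (commented out so this sketch has no side effects):
# (crontab -l 2>/dev/null; echo "$CRON_ENTRY") | crontab -

echo "$CRON_ENTRY"
```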
diff --git a/versioned_docs/version-2.1/lakehouse/datalake-analytics/iceberg.md b/versioned_docs/version-2.1/lakehouse/datalake-analytics/iceberg.md
index 8da01d6b9aa..48d3da31d7c 100644
--- a/versioned_docs/version-2.1/lakehouse/datalake-analytics/iceberg.md
+++ b/versioned_docs/version-2.1/lakehouse/datalake-analytics/iceberg.md
@@ -216,6 +216,23 @@ The data is stored on Huawei Cloud OBS:
"obs.region" = "cn-north-4"
```
+## Example
+
+```
+-- MinIO & Rest Catalog
+CREATE CATALOG `iceberg` PROPERTIES (
+ "type" = "iceberg",
+ "iceberg.catalog.type" = "rest",
+ "uri" = "http://10.0.0.1:8181",
+ "warehouse" = "s3://bucket",
+ "token" = "token123456",
+ "s3.access_key" = "ak",
+ "s3.secret_key" = "sk",
+ "s3.endpoint" = "http://10.0.0.1:9000",
+ "s3.region" = "us-east-1"
+);
+```
+
## Column type mapping
| Iceberg Type | Doris Type |
@@ -233,8 +250,8 @@ The data is stored on Huawei Cloud OBS:
| string | string |
| fixed(L) | char(L) |
| binary | string |
-| struct | struct (since 2.1.3) |
-| map | map (since 2.1.3) |
+| struct | struct (since 2.1.3) |
+| map | map (since 2.1.3) |
| list | array |
| time | unsupported |
diff --git a/versioned_docs/version-2.1/lakehouse/datalake-analytics/paimon.md b/versioned_docs/version-2.1/lakehouse/datalake-analytics/paimon.md
index 63ec1f0a3f6..dab57c66221 100644
--- a/versioned_docs/version-2.1/lakehouse/datalake-analytics/paimon.md
+++ b/versioned_docs/version-2.1/lakehouse/datalake-analytics/paimon.md
@@ -27,24 +27,20 @@ under the License.
# Paimon
-<version since="dev">
-</version>
-
## Instructions for use
1. When the data is in HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml in the conf directory of both FE and BE. Doris reads the Hadoop configuration files in the conf directory first, and then the configuration files referenced by the environment variable `HADOOP_CONF_DIR`.
-2. The currently adapted version of the payment is 0.6.0
+2. The currently adapted version of Paimon is 0.7.
## Create Catalog
Paimon Catalog currently supports two types of Metastore for creating catalogs:
+
* filesystem (default): stores both metadata and data in the file system.
* hive metastore: also stores metadata in Hive Metastore, so users can access these tables directly from Hive.
### Creating a Catalog Based on FileSystem
-> For versions 2.0.1 and earlier, please use the following `Create Catalog based on Hive Metastore`.
-
#### HDFS
```sql
@@ -168,12 +164,13 @@ CREATE CATALOG `paimon_kerberos` PROPERTIES (
| DoubleType                            | Double                    |                                       |
| VarCharType                           | VarChar                   |                                       |
| CharType                              | Char                      |                                       |
+| VarBinaryType, BinaryType             | Binary                    |                                       |
| DecimalType(precision, scale)         | Decimal(precision, scale) |                                       |
| TimestampType,LocalZonedTimestampType | DateTime                  |                                       |
| DateType                              | Date                      |                                       |
-| MapType                               | Map                       | Support Map nesting                   |
| ArrayType                             | Array                     | Support Array nesting                 |
-| VarBinaryType, BinaryType             | Binary                    |                                       |
+| MapType                               | Map                       | Support Map nesting                   |
+| RowType                               | Struct                    | Support Struct nesting (since 2.0.10 & 2.1.3) |
## FAQ
@@ -188,8 +185,9 @@ CREATE CATALOG `paimon_kerberos` PROPERTIES (
3. When accessing object storage (OSS, S3, etc.), encounter "file system does not support".
- In versions before 2.0.5 (inclusive), users need to manually download the following jar package and place it in the `${DORIS_HOME}/be/lib/java_extensions/preload-extensions` directory, and restart BE.
+ In versions before 2.0.5 (inclusive), users need to manually download the following jar package and place it in the `${DORIS_HOME}/be/lib/java_extensions/preload-extensions` directory, and restart BE.
- OSS: [paimon-oss-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-oss/0.6.0-incubating/paimon-oss-0.6.0-incubating.jar)
- Other Object Storage: [paimon-s3-0.6.0-incubating.jar](https://repo.maven.apache.org/maven2/org/apache/paimon/paimon-s3/0.6.0-incubating/paimon-s3-0.6.0-incubating.jar)
+ No need to download these jars since 2.0.6.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]