This is an automated email from the ASF dual-hosted git repository.
morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new 0dd8383c041 [lakhouse] add and modify properties doc (#2166)
0dd8383c041 is described below
commit 0dd8383c041c014225b488e3119d9fdf81d36f8a
Author: Mingyu Chen (Rayner) <[email protected]>
AuthorDate: Mon Mar 10 12:48:55 2025 +0800
[lakhouse] add and modify properties doc (#2166)
## Versions
- [x] dev
- [ ] 3.0
- [ ] 2.1
- [ ] 2.0
## Languages
- [x] Chinese
- [x] English
## Docs Checklist
- [ ] Checked by AI
- [ ] Test Cases Built
---
docs/lakehouse/catalogs/hudi-catalog.md | 6 ++
docs/lakehouse/catalogs/maxcompute-catalog.md | 3 +-
docs/lakehouse/metastores/hive-metastore.md | 86 +++++++++++++++-
docs/lakehouse/metastores/iceberg-rest.md | 9 +-
docs/lakehouse/storages/aliyun-oss.md | 46 ++++++++-
docs/lakehouse/storages/hdfs.md | 89 ++++++++++++++++-
docs/lakehouse/storages/huawei-obs.md | 45 ++++++++-
docs/lakehouse/storages/tencent-cos.md | 48 ++++++++-
.../current/lakehouse/catalogs/hudi-catalog.md | 6 ++
.../lakehouse/catalogs/maxcompute-catalog.md | 3 +-
.../current/lakehouse/metastores/hive-metastore.md | 109 ++++++++++++---------
.../current/lakehouse/metastores/iceberg-rest.md | 4 +-
.../current/lakehouse/storages/aliyun-oss.md | 12 +--
.../current/lakehouse/storages/hdfs.md | 60 +++++++-----
.../current/lakehouse/storages/huawei-obs.md | 18 ++--
.../current/lakehouse/storages/tencent-cos.md | 18 ++--
16 files changed, 442 insertions(+), 120 deletions(-)
diff --git a/docs/lakehouse/catalogs/hudi-catalog.md
b/docs/lakehouse/catalogs/hudi-catalog.md
index 99b5bcd82da..48c91d29aa9 100644
--- a/docs/lakehouse/catalogs/hudi-catalog.md
+++ b/docs/lakehouse/catalogs/hudi-catalog.md
@@ -222,6 +222,12 @@ By using `desc` to view the execution plan, you can see
that Doris converts `@in
| inputSplitNum=1, totalFileSize=13099711, scanRanges=1
```
+## FAQ
+
+1. Query occasionally hangs when using the Java SDK to read Hudi incremental data through JNI
+
+ Please add `-Djol.skipHotspotSAAttach=true` to `JAVA_OPTS_FOR_JDK_17` or
`JAVA_OPTS` in `be.conf`.
+
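+For reference, a minimal sketch of the corresponding `be.conf` change. The existing option string varies by deployment; `<existing options>` is a placeholder:
+
+```plaintext
+# Append the flag to the existing JVM options; "<existing options>" is a placeholder.
+JAVA_OPTS_FOR_JDK_17="<existing options> -Djol.skipHotspotSAAttach=true"
+```
+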
## Appendix
### Change Log
diff --git a/docs/lakehouse/catalogs/maxcompute-catalog.md
b/docs/lakehouse/catalogs/maxcompute-catalog.md
index 1fbe434289d..b9f9aadb278 100644
--- a/docs/lakehouse/catalogs/maxcompute-catalog.md
+++ b/docs/lakehouse/catalogs/maxcompute-catalog.md
@@ -78,7 +78,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
| `mc.connect_timeout` | `10s` | Timeout for connecting to
MaxCompute. | 2.1.8 and later |
| `mc.read_timeout` | `120s` | Timeout for reading from
MaxCompute. | 2.1.8 and later |
| `mc.retry_count` | `4` | Number of retries after a
timeout. | 2.1.8 and later |
- | `mc.datetime_predicate_push_down` | `true` | Whether to allow pushdown of
predicate conditions of `timestamp/timestamp_ntz` types. Doris will lose
precision (9 -> 6) when synchronizing these two types. Therefore, if the
original data has a precision higher than 6 digits, condition pushdown may lead
to inaccurate results. | 2.1.9 and later |
+ | `mc.datetime_predicate_push_down` | `true` | Whether to allow pushdown of
predicate conditions of `timestamp/timestamp_ntz` types. Doris will lose
precision (9 -> 6) when synchronizing these two types. Therefore, if the
original data has a precision higher than 6 digits, condition pushdown may lead
to inaccurate results. | 2.1.9/3.0.5 and later |
* `{CommonProperties}`
@@ -113,6 +113,7 @@ Only the public cloud version of MaxCompute is supported.
For support with the p
| date | date |
|
| datetime | datetime(3) | Fixed mapping to precision 3. You can
specify the time zone using `SET [GLOBAL] time_zone = 'Asia/Shanghai'`. |
| timestamp_ntz | datetime(6) | The precision of MaxCompute's
`timestamp_ntz` is 9, but Doris' DATETIME supports a maximum precision of 6.
Therefore, the extra part will be directly truncated when reading data. |
+| timestamp | datetime(6) | Since 2.1.9 & 3.0.5. The precision of
MaxCompute's `timestamp` is 9, but Doris' DATETIME supports a maximum precision
of 6. Therefore, the extra part will be directly truncated when reading data. |
| array | array |
|
| map | map |
|
| struct | struct |
|
diff --git a/docs/lakehouse/metastores/hive-metastore.md
b/docs/lakehouse/metastores/hive-metastore.md
index 06aecaad309..437701f7ed2 100644
--- a/docs/lakehouse/metastores/hive-metastore.md
+++ b/docs/lakehouse/metastores/hive-metastore.md
@@ -1,7 +1,7 @@
---
{
- "title": "Hive Metastore",
- "language": "en"
+ "title": "Hive Metastore",
+ "language": "en"
}
---
@@ -24,5 +24,85 @@ specific language governing permissions and limitations
under the License.
-->
-The document is under development, please refer to versioned doc 2.1 or 3.0
+This document describes the parameters supported when connecting to and accessing the Hive Metastore through the `CREATE CATALOG` statement.
+## Parameter Overview
+| Property Name | Alias | Description
| Default | Required |
+|--------------------------------------|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|------|
+| `hive.metastore.uris` | | The URI address of the Hive
Metastore. Multiple URIs can be specified, separated by commas. The first URI
is used by default, and if the first URI is unavailable, others will be tried.
For example: `thrift://172.0.0.1:9083` or
`thrift://172.0.0.1:9083,thrift://172.0.0.2:9083` | None | Yes |
+| `hive.conf.resources` | | The location of the hive-site.xml
file, used to load the parameters needed to connect to HMS from the
hive-site.xml file. If the hive-site.xml file contains complete connection
parameter information, only this parameter needs to be filled in. The
configuration file must be placed in the FE deployment directory, with the
default directory being `/plugins/hadoop_conf/` under the deployment directory
(the default path can be changed by modifying `h [...]
+| `hive.metastore.authentication.type` | | The authentication method for the
Hive Metastore. Supports `simple` and `kerberos`. In versions 2.1 and earlier,
the authentication method is determined by the `hadoop.security.authentication`
property. Starting from version 3.0, the authentication method for the Hive
Metastore can be specified separately. | simple | No |
+| `hive.metastore.service.principal` | | When the authentication method is
kerberos, used to specify the principal of the Hive Metastore server.
| Empty | No |
+| `hive.metastore.client.principal` | | When the authentication method is
kerberos, used to specify the principal of the Hive Metastore client. In
versions 2.1 and earlier, this parameter is determined by the
`hadoop.kerberos.principal` property.
| Empty | No |
+| `hive.metastore.client.keytab` | | When the authentication method is
kerberos, used to specify the keytab of the Hive Metastore client. The keytab
file must be placed in the same directory on all FE nodes.
| Empty | No |
+## Authentication Parameters
+In Hive Metastore, there are two authentication methods: simple and kerberos.
+
+### `hive.metastore.authentication.type`
+
+- Description
+ Specifies the authentication method for the Hive Metastore.
+
+- Optional Values
+ - `simple` (default): No authentication is used.
+ - `kerberos`: Enable Kerberos authentication.
+
+- Version Differences
+ - Versions 2.1 and earlier: Relies on the global parameter
`hadoop.security.authentication`
+ - Version 3.1+: Can be configured independently
+
+### Enabling Simple Authentication Related Parameters
+Simply specify `hive.metastore.authentication.type = simple`. **Not recommended for production environments.**
+
+#### Complete Example
+```plaintext
+"hive.metastore.authentication.type" = "simple"
+```
+
+### Enabling Kerberos Authentication Related Parameters
+
+#### `hive.metastore.service.principal`
+- Description
+ The Kerberos principal of the Hive Metastore service, used for Doris to
verify the identity of the Metastore.
+
+- Placeholder Support
+ `_HOST` will automatically be replaced with the actual hostname of the
connected Metastore (suitable for multi-node Metastore clusters).
+
+- Example
+ ```plaintext
+ hive/[email protected]
+ hive/[email protected] # Dynamically resolve the actual hostname
+ ```
+
+#### `hive.metastore.client.principal`
+- Description
+ The Kerberos principal used when connecting to the Hive Metastore service.
For example: `doris/[email protected]` or `doris/[email protected]`.
+
+- Placeholder Support
+ `_HOST` will automatically be replaced with the actual hostname of the
connected Metastore (suitable for multi-node Metastore clusters).
+
+- Example
+ ```plaintext
+ doris/[email protected]
+ doris/[email protected] # Dynamically resolve the actual hostname
+ ```
+
+#### `hive.metastore.client.keytab`
+- Description
+ The path to the keytab file containing the key for the specified
principal. The operating system user running all FEs must have permission to
read this file.
+
+- Example
+ ```plaintext
+ "hive.metastore.client.keytab" = "conf/doris.keytab"
+ ```
+
+#### Complete Example
+
+Enable Kerberos authentication
+
+```plaintext
+"hive.metastore.authentication.type" = "kerberos",
+"hive.metastore.service.principal" = "hive/[email protected]",
+"hive.metastore.client.principal" = "doris/[email protected]",
+"hive.metastore.client.keytab" = "etc/doris/conf/doris.keytab"
+```
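+
+As a usage sketch, these properties would typically appear inside a `CREATE CATALOG` statement. The catalog name, realm, and keytab path below are illustrative placeholders, not values from this document:
+
+```sql
+CREATE CATALOG hive_kerberos PROPERTIES (
+    'type' = 'hms',
+    'hive.metastore.uris' = 'thrift://172.0.0.1:9083',
+    'hive.metastore.authentication.type' = 'kerberos',
+    'hive.metastore.service.principal' = 'hive/[email protected]',
+    'hive.metastore.client.principal' = 'doris/[email protected]',
+    'hive.metastore.client.keytab' = 'etc/doris/conf/doris.keytab'
+);
+```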
diff --git a/docs/lakehouse/metastores/iceberg-rest.md
b/docs/lakehouse/metastores/iceberg-rest.md
index 8f702c41f1f..a6d1a696f24 100644
--- a/docs/lakehouse/metastores/iceberg-rest.md
+++ b/docs/lakehouse/metastores/iceberg-rest.md
@@ -24,5 +24,12 @@ specific language governing permissions and limitations
under the License.
-->
-The document is under development, please refer to versioned doc 2.1 or 3.0
+This document describes the parameters supported when using the `CREATE CATALOG` statement to connect to a metadata service that implements the Iceberg REST Catalog interface.
+
+| Property Name | Former Name | Description
| Default Value | Required |
+| -------------------------- | --- |
------------------------------------------- | ---- | ---------- |
+| `iceberg.rest.uri` | uri | Rest Catalog connection address.
Example: `http://172.21.0.1:8181` | | Yes |
+| `iceberg.rest.security.type` | | Security authentication method for Rest
Catalog. Supports `none` or `oauth2` | `none` | `oauth2` not yet supported |
+| `iceberg.rest.prefix` | |
| | Not yet supported |
+| `iceberg.rest.oauth2.xxx` | | Information related to oauth2
authentication | | Not yet supported |
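+
+As a usage sketch (the catalog name is a placeholder, and `iceberg.catalog.type = 'rest'` is assumed from Doris' Iceberg catalog conventions rather than stated in the table above):
+
+```sql
+CREATE CATALOG iceberg_rest PROPERTIES (
+    'type' = 'iceberg',
+    'iceberg.catalog.type' = 'rest',
+    'iceberg.rest.uri' = 'http://172.21.0.1:8181'
+);
+```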
diff --git a/docs/lakehouse/storages/aliyun-oss.md
b/docs/lakehouse/storages/aliyun-oss.md
index 3d7a44c51d9..feb76997898 100644
--- a/docs/lakehouse/storages/aliyun-oss.md
+++ b/docs/lakehouse/storages/aliyun-oss.md
@@ -1,7 +1,7 @@
---
{
- "title": "Aliyun OSS",
- "language": "en"
+ "title": "Aliyun OSS",
+ "language": "en"
}
---
@@ -24,5 +24,45 @@ specific language governing permissions and limitations
under the License.
-->
-The document is under development, please refer to versioned doc 2.1 or 3.0
+# Aliyun OSS Access Parameters
+
+This document introduces the parameters required to access Aliyun OSS,
applicable to the following scenarios:
+
+- Catalog properties
+- Table Valued Function properties
+- Broker Load properties
+- Export properties
+- Outfile properties
+
+**Doris uses the S3 Client to access Aliyun OSS through the S3 compatible
protocol.**
+
+## Parameter Overview
+| Property Name | Former Name | Description
| Default | Required |
+|-----------------------------------|------------------|------------------------------------------------------------------|---------|----------|
+| `s3.endpoint` | `oss.endpoint` | OSS endpoint,
specifies the access endpoint for Aliyun OSS. Note that the endpoints for OSS
and OSS HDFS are different. | | Yes |
+| `s3.region` | `oss.region` | OSS region, specifies
the region for Aliyun OSS | | No |
+| `s3.access_key` | `oss.access_key` | OSS access key, the
access key for authentication with OSS | | Yes |
+| `s3.secret_key` | `oss.secret_key` | OSS secret key, the
secret key used in conjunction with the access key | | Yes |
+| `s3.connection.maximum` | | Maximum number of S3
connections, specifies the maximum number of connections established with the
OSS service | `50` | No |
+| `s3.connection.request.timeout` | | S3 request timeout,
in milliseconds, specifies the request timeout when connecting to the OSS
service | `3000` | No |
+| `s3.connection.timeout` | | S3 connection
timeout, in milliseconds, specifies the timeout when establishing a connection
with the OSS service | `1000` | No |
+| `s3.sts_endpoint` | | Not yet supported
| | No |
+| `s3.sts_region` | | Not yet supported
| | No |
+| `s3.iam_role` | | Not yet supported
| | No |
+| `s3.external_id` | | Not yet supported
| | No |
+
+### Authentication Configuration
+When accessing Aliyun OSS, you need to provide Aliyun's Access Key and Secret
Key, which are the following parameters:
+
+- `s3.access_key` (or `oss.access_key`)
+- `s3.secret_key` (or `oss.secret_key`)
+
+### Example Configuration
+
+```plaintext
+"oss.access_key" = "ak",
+"oss.secret_key" = "sk",
+"oss.endpoint" = "oss-cn-beijing.aliyuncs.com",
+"oss.region" = "cn-beijing"
+```
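+
+As a usage sketch, the same properties can be passed to the `s3()` table valued function. The bucket, path, and file format below are placeholders:
+
+```sql
+SELECT * FROM s3(
+    'uri' = 'https://my-bucket.oss-cn-beijing.aliyuncs.com/path/to/file.parquet',
+    'format' = 'parquet',
+    'oss.access_key' = 'ak',
+    'oss.secret_key' = 'sk',
+    'oss.endpoint' = 'oss-cn-beijing.aliyuncs.com',
+    'oss.region' = 'cn-beijing'
+);
+```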
diff --git a/docs/lakehouse/storages/hdfs.md b/docs/lakehouse/storages/hdfs.md
index 8393b40d009..0364e58b391 100644
--- a/docs/lakehouse/storages/hdfs.md
+++ b/docs/lakehouse/storages/hdfs.md
@@ -1,7 +1,7 @@
---
{
- "title": "HDFS",
- "language": "en"
+ "title": "HDFS",
+ "language": "en"
}
---
@@ -23,6 +23,89 @@ KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
+# HDFS
+This document describes the parameters required when accessing HDFS. These parameters apply to:
+- Catalog properties.
+- Table Valued Function properties.
+- Broker Load properties.
+- Export properties.
+- Outfile properties.
+- Backup and restore.
-The document is under development, please refer to versioned doc 2.1 or 3.0
+## Parameter Overview
+| Property Name | Former Name
| Description
| Default Value | Required |
+|------------------------------------------|----------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|----------|
+| `hdfs.authentication.type` | `hadoop.security.authentication`
| Authentication type for accessing HDFS. Supports `kerberos` and `simple`
| `simple` | No |
+| `hdfs.authentication.kerberos.principal` | `hadoop.kerberos.principal`
| Specifies the principal when the authentication type is `kerberos`
| - | No |
+| `hdfs.authentication.kerberos.keytab` | `hadoop.kerberos.keytab`
| Specifies the keytab when the authentication type is `kerberos`
| - | No |
+| `hdfs.impersonation.enabled` | -
| If `true`, HDFS impersonation will be enabled. It will use the proxy user
configured in `core-site.xml` to proxy the Doris login user to perform HDFS
operations
| `Not supported yet` | - |
+| `hadoop.username` | -
| When the authentication type is `simple`, this user will be used to access
HDFS. By default, the Linux system user running the Doris process will be used
| - | - |
+| `hadoop.config.resources` | -
| Specifies the directory of HDFS-related configuration files (must include
`hdfs-site.xml` and `core-site.xml`), must use a relative path, the default
directory is /plugins/hadoop_conf/ under the (FE/BE) deployment directory (can
be changed by modifying hadoop_config_dir in fe.conf/be.conf). All FE and BE
nodes must configure the same relative path. Example:
`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml [...]
+| `dfs.nameservices` | -
| Manually configure parameters for HDFS high availability clusters. If
configured with `hadoop.config.resources`, parameters will be automatically
read from `hdfs-site.xml`. Must be used with the following
parameters:<br>`dfs.ha.namenodes.your-nameservice`<br>`dfs.namenode.rpc-address.your-nameservice.nn1`<br>`dfs.client.failover.proxy.provider`
etc. | - | - |
+
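+A minimal sketch of manually configuring an HA nameservice follows. All names and addresses are placeholders, and the suffixed failover-provider key is assumed from standard Hadoop conventions; when `hadoop.config.resources` is configured, these values are read from `hdfs-site.xml` instead:
+
+```plaintext
+"dfs.nameservices" = "your-nameservice",
+"dfs.ha.namenodes.your-nameservice" = "nn1,nn2",
+"dfs.namenode.rpc-address.your-nameservice.nn1" = "172.21.0.1:8020",
+"dfs.namenode.rpc-address.your-nameservice.nn2" = "172.21.0.2:8020",
+"dfs.client.failover.proxy.provider.your-nameservice" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
+```
+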
+### Authentication Configuration
+- `hdfs.authentication.type`: Used to specify the authentication type. Options
are `kerberos` or `simple`. If `kerberos` is selected, the system will use
Kerberos authentication to interact with HDFS; if `simple` is used, it means no
authentication is used, suitable for open HDFS clusters. Choosing kerberos
requires configuring the corresponding principal and keytab.
+- `hdfs.authentication.kerberos.principal`: Specifies the Kerberos principal
when the authentication type is `kerberos`. A Kerberos principal is a string
that uniquely identifies an identity, usually including the service name,
hostname, and domain name.
+- `hdfs.authentication.kerberos.keytab`: This parameter specifies the path to
the keytab file used for Kerberos authentication. The keytab file is used to
store encrypted credentials, allowing the system to authenticate automatically
without requiring the user to manually enter a password.
+
+#### Authentication Types
+HDFS supports two authentication methods:
+- Kerberos
+- Simple
+
+##### Simple Authentication
+Simple authentication is suitable for HDFS clusters where Kerberos is not
enabled.
+
+To use Simple authentication, the following parameter needs to be set:
+
+```plaintext
+"hdfs.authentication.type" = "simple"
+```
+
+In Simple authentication mode, the `hadoop.username` parameter can be used to
specify the username. If not specified, the username of the current process
will be used by default.
+
+**Example:**
+
+Access HDFS using the `lakers` username
+```plaintext
+"hdfs.authentication.type" = "simple",
+"hadoop.username" = "lakers"
+```
+
+Access HDFS using the default system user
+```plaintext
+"hdfs.authentication.type" = "simple"
+```
+##### Kerberos Authentication
+Kerberos authentication is suitable for HDFS clusters where Kerberos is
enabled.
+
+To use Kerberos authentication, the following parameters need to be set:
+
+```plaintext
+"hdfs.authentication.type" = "kerberos",
+"hdfs.authentication.kerberos.principal" = "<your_principal>",
+"hdfs.authentication.kerberos.keytab" = "<your_keytab>"
+```
+
+In Kerberos authentication mode, the Kerberos principal and keytab file path
need to be set.
+
+Doris will access HDFS with the identity specified by the
`hdfs.authentication.kerberos.principal` property, using the keytab specified
by the keytab for authentication.
+
+**Note:**
+- The keytab file must exist at the same path on every FE and BE node, and the user running the Doris process must have read permission for it.
+
+Example:
+```plaintext
+"hdfs.authentication.type" = "kerberos",
+"hdfs.authentication.kerberos.principal" = "hdfs/[email protected]",
+"hdfs.authentication.kerberos.keytab" = "/etc/security/keytabs/hdfs.keytab"
+```
+
+### Configuration Files
+
+Doris supports specifying the directory of HDFS-related configuration files
through the `hadoop.config.resources` parameter.
+
+The configuration file directory must include `hdfs-site.xml` and
`core-site.xml` files, the default directory is `/plugins/hadoop_conf/` under
the (FE/BE) deployment directory. All FE and BE nodes must configure the same
relative path.
+
+If the configuration file contains parameters that the user has also configured explicitly, the explicitly configured values take precedence. `hadoop.config.resources` can specify multiple files, separated by commas. For example: `hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`.
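+
+Putting the pieces together, a Hive catalog over Kerberos-secured HDFS might look like the following sketch. The catalog name, metastore URI, principal, realm, and paths are placeholders:
+
+```sql
+CREATE CATALOG hive_on_hdfs PROPERTIES (
+    'type' = 'hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:9083',
+    'hadoop.config.resources' = 'hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml',
+    'hdfs.authentication.type' = 'kerberos',
+    'hdfs.authentication.kerberos.principal' = 'doris/[email protected]',
+    'hdfs.authentication.kerberos.keytab' = '/etc/security/keytabs/doris.keytab'
+);
+```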
diff --git a/docs/lakehouse/storages/huawei-obs.md
b/docs/lakehouse/storages/huawei-obs.md
index 56856634d95..a6dfbfcb3af 100644
--- a/docs/lakehouse/storages/huawei-obs.md
+++ b/docs/lakehouse/storages/huawei-obs.md
@@ -1,7 +1,7 @@
---
{
- "title": "Huawei OBS",
- "language": "en"
+ "title": "Huawei OBS",
+ "language": "en"
}
---
@@ -24,5 +24,44 @@ specific language governing permissions and limitations
under the License.
-->
-The document is under development, please refer to versioned doc 2.1 or 3.0
+## Huawei Cloud OBS Access Parameters
+
+This document introduces the parameters required to access Huawei Cloud OBS,
applicable to the following scenarios:
+
+- Catalog properties
+- Table Valued Function properties
+- Broker Load properties
+- Export properties
+- Outfile properties
+
+**Doris uses the S3 Client to access Huawei Cloud OBS through the S3
compatible protocol.**
+### Parameter Overview
+
+| Property Name | Former Name | Description
| Default | Required |
+|-----------------------------------|------------------|--------------------------------------------|---------|----------|
+| `s3.endpoint` | `obs.endpoint` | OBS endpoint,
specifies the access endpoint of Huawei Cloud OBS | | Yes |
+| `s3.region` | `obs.region` | OBS region, specifies
the region of Huawei Cloud OBS | | No |
+| `s3.access_key` | `obs.access_key` | OBS access key, the
access key for authentication | | Yes |
+| `s3.secret_key` | `obs.secret_key` | OBS secret key, the
secret key used with the access key | | Yes |
+| `s3.connection.maximum` | | Maximum number of S3
connections, specifies the maximum number of connections established with the
OBS service | `50` | No |
+| `s3.connection.request.timeout` | | S3 request timeout in
milliseconds, specifies the request timeout when connecting to the OBS service
| `3000` | No |
+| `s3.connection.timeout` | | S3 connection timeout
in milliseconds, specifies the timeout when establishing a connection with the
OBS service | `1000` | No |
+
+### Authentication Configuration
+
+When accessing Huawei Cloud OBS, you need to provide Huawei Cloud's Access Key
and Secret Key, which are the following parameters:
+
+- `s3.access_key` (or `obs.access_key`)
+- `s3.secret_key` (or `obs.secret_key`)
+
+These two parameters are used for authentication to ensure access permissions
to Huawei Cloud OBS.
+
+### Configuration Example
+
+```plaintext
+"s3.access_key" = "ak",
+"s3.secret_key" = "sk",
+"s3.endpoint" = "obs.cn-north-4.myhuaweicloud.com",
+"s3.region" = "cn-north-4"
+```
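+
+As a usage sketch, these properties can back a Hive catalog whose data lives on OBS. The catalog name and metastore URI are placeholders:
+
+```sql
+CREATE CATALOG hive_on_obs PROPERTIES (
+    'type' = 'hms',
+    'hive.metastore.uris' = 'thrift://172.21.0.1:9083',
+    's3.endpoint' = 'obs.cn-north-4.myhuaweicloud.com',
+    's3.region' = 'cn-north-4',
+    's3.access_key' = 'ak',
+    's3.secret_key' = 'sk'
+);
+```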
diff --git a/docs/lakehouse/storages/tencent-cos.md
b/docs/lakehouse/storages/tencent-cos.md
index e0879e7f7cf..dfab7aba931 100644
--- a/docs/lakehouse/storages/tencent-cos.md
+++ b/docs/lakehouse/storages/tencent-cos.md
@@ -1,7 +1,7 @@
---
{
- "title": "Tencent COS",
- "language": "en"
+ "title": "Tencent COS",
+ "language": "en"
}
---
@@ -24,5 +24,47 @@ specific language governing permissions and limitations
under the License.
-->
-The document is under development, please refer to versioned doc 2.1 or 3.0
+## Tencent Cloud COS Access Parameters
+
+This document introduces the parameters required to access Tencent Cloud COS,
applicable to the following scenarios:
+
+- Catalog properties
+- Table Valued Function properties
+- Broker Load properties
+- Export properties
+- Outfile properties
+
+**Doris uses the S3 Client to access Tencent Cloud COS through the S3
compatible protocol.**
+
+## Parameter Overview
+
+| Property Name | Former Name | Description
| Default Value | Required |
+|-----------------------------------|------------------|--------------------------------------------|---------------|----------|
+| `s3.endpoint` | `cos.endpoint` | COS endpoint,
specifies the access endpoint of Tencent Cloud COS | | Yes |
+| `s3.region` | `cos.region` | COS region, specifies
the region of Tencent Cloud COS | | No |
+| `s3.access_key` | `cos.access_key` | COS access key, the
access key for authentication | | Yes |
+| `s3.secret_key` | `cos.secret_key` | COS secret key, the
secret key used with the access key | | Yes |
+| `s3.connection.maximum` | | Maximum S3
connections, specifies the maximum number of connections to the COS service |
`50` | No |
+| `s3.connection.request.timeout` | | S3 request timeout,
in milliseconds, specifies the request timeout when connecting to the COS
service | `3000` | No |
+| `s3.connection.timeout` | | S3 connection
timeout, in milliseconds, specifies the timeout when establishing a connection
to the COS service | `1000` | No |
+| `s3.sts_endpoint` | | Not supported yet
| | No |
+| `s3.sts_region` | | Not supported yet
| | No |
+| `s3.iam_role` | | Not supported yet
| | No |
+| `s3.external_id` | | Not supported yet
| | No |
+
+### Authentication Configuration
+
+When accessing Tencent Cloud COS, you need to provide Tencent Cloud's Access
Key and Secret Key, which are the following parameters:
+
+- `s3.access_key` (or `cos.access_key`)
+- `s3.secret_key` (or `cos.secret_key`)
+
+### Example Configuration
+
+```plaintext
+"cos.access_key" = "ak",
+"cos.secret_key" = "sk",
+"cos.endpoint" = "cos.ap-beijing.myqcloud.com",
+"cos.region" = "ap-beijing"
+```
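+
+As a usage sketch, the same properties also apply to `OUTFILE` exports. The table name, bucket, and path are placeholders:
+
+```sql
+SELECT * FROM my_table
+INTO OUTFILE "s3://my-bucket/export/result_"
+FORMAT AS PARQUET
+PROPERTIES (
+    'cos.access_key' = 'ak',
+    'cos.secret_key' = 'sk',
+    'cos.endpoint' = 'cos.ap-beijing.myqcloud.com',
+    'cos.region' = 'ap-beijing'
+);
+```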
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hudi-catalog.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hudi-catalog.md
index 5fc3636273c..d770933f36c 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hudi-catalog.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/hudi-catalog.md
@@ -229,6 +229,12 @@ SELECT * from hudi_table@incr('beginTime'='xxx',
['endTime'='xxx'], ['hoodie.rea
| inputSplitNum=1, totalFileSize=13099711, scanRanges=1
```
+## FAQ
+
+1. 通过 JNI 调用 Java SDK 读取 Hudi 增量数据偶发卡死
+
+ 请在 `be.conf` 的 `JAVA_OPTS_FOR_JDK_17` 或 `JAVA_OPTS` 中添加
`-Djol.skipHotspotSAAttach=true`.
+
## 附录
### 版本更新记录
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/maxcompute-catalog.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/maxcompute-catalog.md
index 6e81fe847dd..68f65b07b64 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/maxcompute-catalog.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/catalogs/maxcompute-catalog.md
@@ -78,7 +78,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
| `mc.connect_timeout` | `10s` | 连接 maxcompute 的超时时间
| 2.1.8(含)之后 |
| `mc.read_timeout` | `120s` | 读取 maxcompute 的超时时间
| 2.1.8(含)之后 |
| `mc.retry_count` | `4` | 超时后的重试次数
| 2.1.8(含)之后 |
- | `mc.datetime_predicate_push_down` | `true` | 是否允许下推
`timestamp/timestamp_ntz` 类型的谓词条件。Doris 对这两个类型的同步会丢失精度(9 ->
6)。因此如果原数据精度高于6位,则条件下推可能导致结果不准确。 | 2.1.9(含)之后 |
+ | `mc.datetime_predicate_push_down` | `true` | 是否允许下推
`timestamp/timestamp_ntz` 类型的谓词条件。Doris 对这两个类型的同步会丢失精度(9 ->
6)。因此如果原数据精度高于6位,则条件下推可能导致结果不准确。 | 2.1.9/3.0.5(含)之后 |
* `{CommonProperties}`
@@ -113,6 +113,7 @@ CREATE CATALOG [IF NOT EXISTS] catalog_name PROPERTIES (
| date | date |
|
| datetime | datetime(3) | 固定映射到精度 3。可以通过 `SET [GLOBAL] time_zone =
'Asia/Shanghai'` 来指定时区 |
| timestamp_ntz | datetime(6) | MaxCompute 的 `timestamp_ntz` 精度为 9, Doris
的 DATETIME 最大精度只有 6,故读取数据时会将多的部分直接截断。 |
+| timestamp | datetime(6) | 自 2.1.9/3.0.5 支持。MaxCompute 的 `timestamp` 精度为
9, Doris 的 DATETIME 最大精度只有 6,故读取数据时会将多的部分直接截断。 |
| array | array |
|
| map | map |
|
| struct | struct |
|
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/hive-metastore.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/hive-metastore.md
index 32335f2804d..0e0ef4a308f 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/hive-metastore.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/hive-metastore.md
@@ -26,69 +26,84 @@ under the License.
本文档用于介绍通过 `CREATE CATALOG` 语句连接并访问 Hive Metastore 时所支持的参数。
## 参数总览
-| 属性名称 | 描述
| 默认值 | 是否必须 |
-|--------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|------|
-| `hive.metastore.uris` | Hive Metastore 的 URI 地址。支持指定多个
URI,使用逗号分隔。默认使用第一个 URI,当第一个 URI 不可用时,会尝试使用其他的。如:`thrift://172.0.0.1:9083` 或
`thrift://172.0.0.1:9083,thrift://172.0.0.2:9083`
| 无 | 是 |
-| `hive.conf.resources` | hive-site.xml 文件位置,用于从hive-site.xml
文件中加载连接 HMS 所需参数,若hive-site.xml 文件包含完整的链接参数信息,则可仅填写此参数。配置文件必须放在 FE
部署目录,默认目录为部署目录下的 /plugins/hadoop_conf/(可修改fe.conf中的hadoop_config_dir
来更改默认路径),文件位置需要为相对路径,如 hms-1/hive-site.xml。且所有 FE 节点都必须含有此文件。 | 空 | 否 |
-| `hive.metastore.authentication.type` | Hive Metastore 的认证方式。支持 `simple` 和
`kerberos` 两种。在 2.1 及之前版本中,认证方式由`hadoop.security.authentication`属性决定。3.0
版本开始,可以单独指定 Hive Metastore 的认证方式。
| simple | 否 |
-| `hive.metastore.service.principal` | 当认证方式为 kerberos 时,用于指定 Hive Metastore
服务端的 principal。
| 空 | 否 |
-| `hive.metastore.client.principal` | 当认证方式为 kerberos 时,用于指定 Hive Metastore
客户端的 principal。在 2.1 及之前版本中,该参数由`hadoop.kerberos.principal`属性决定。
| 空 | 否 |
-| `hive.metastore.client.keytab` | 当认证方式为 kerberos 时,用于指定 Hive Metastore
客户端的 keytab。keytab 文件必须要放置到所有 FE 节点的相同目录下。
| 空 | 否 |
+| 属性名称 | 曾用名 | 描述
| 默认值 | 是否必须 |
+|--------------------------------------|---|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------|------|
+| `hive.metastore.uris` | | Hive Metastore 的 URI 地址。支持指定多个
URI,使用逗号分隔。默认使用第一个 URI,当第一个 URI 不可用时,会尝试使用其他的。如:`thrift://172.0.0.1:9083` 或
`thrift://172.0.0.1:9083,thrift://172.0.0.2:9083`
| 无 | 是 |
+| `hive.conf.resources` | | hive-site.xml 文件位置,用于从
hive-site.xml 文件中加载连接 HMS 所需参数,若 hive-site.xml 文件包含完整的链接参数信息,则可仅填写此参数。配置文件必须放在
FE 部署目录,默认目录为部署目录下的 `/plugins/hadoop_conf/`(可修改 fe.conf 中的 `hadoop_config_dir`
来更改默认路径),文件位置需要为相对路径,如 hms-1/hive-site.xml。且所有 FE 节点都必须含有此文件。 | 空 | 否 |
+| `hive.metastore.authentication.type` | | Hive Metastore 的认证方式。支持 `simple` 和
`kerberos` 两种。在 2.1 及之前版本中,认证方式由`hadoop.security.authentication`属性决定。3.0
版本开始,可以单独指定 Hive Metastore 的认证方式。
| simple | 否 |
+| `hive.metastore.service.principal` | | 当认证方式为 kerberos 时,用于指定 Hive
Metastore 服务端的 principal。
| 空 | 否 |
+| `hive.metastore.client.principal` | | 当认证方式为 kerberos 时,用于指定 Hive
Metastore 客户端的 principal。在 2.1 及之前版本中,该参数由`hadoop.kerberos.principal`属性决定。
| 空 | 否 |
+| `hive.metastore.client.keytab` | | 当认证方式为 kerberos 时,用于指定 Hive
Metastore 客户端的 keytab。keytab 文件必须要放置到所有 FE 节点的相同目录下。
| 空 | 否 |
## 认证参数
在 Hive Metastore 中,有两种认证方式:simple 和 kerberos。
+
### `hive.metastore.authentication.type`
-- **描述**
- 指定 Hive Metastore 的认证方式。
-- **可选值**
+
+- 描述
+ 指定 Hive Metastore 的认证方式。
+
+- 可选值
- `simple`(默认): 即不使用任何认证。
- `kerberos`: 启用 Kerberos 认证
-- **版本差异**
+
+- 版本差异
- 2.1 及之前版本:依赖全局参数 `hadoop.security.authentication`
- - 3.0+ 版本:可独立配置
+ - 3.1+ 版本:可独立配置
+
### 启用 Simple 认证相关参数
-直接指定 `hive.metastore.authentication.type = simple` 即可。
-**生产环境不建议使用此方式**
+直接指定 `hive.metastore.authentication.type = simple` 即可。**生产环境不建议使用此方式**
+
#### 完整示例
-```properties
-hive.metastore.authentication.type = simple
+```plaintext
+"hive.metastore.authentication.type" = "simple"
```
+
### 启用 Kerberos 认证相关参数
+
#### `hive.metastore.service.principal`
-- **描述**
- Hive Metastore 服务的 Kerberos 主体,用于 Doris 验证 Metastore 身份。
-- **占位符支持**
- `_HOST` 会自动替换为实际连接的 Metastore 主机名(适用于多节点 Metastore 集群)。
-- **示例**
- ```plaintext
- hive/[email protected]
- hive/[email protected] # 动态解析实际主机名
- ```
+- 描述
+ Hive Metastore 服务的 Kerberos 主体,用于 Doris 验证 Metastore 身份。
+
+- 占位符支持
+ `_HOST` 会自动替换为实际连接的 Metastore 主机名(适用于多节点 Metastore 集群)。
+
+- 示例
+ ```plaintext
+ hive/[email protected]
+ hive/[email protected] # 动态解析实际主机名
+ ```
+
#### `hive.metastore.client.principal`
-- **描述**
- 连接到 Hive MeteStore 服务时使用的 Kerberos 主体。
例如:doris/[email protected]或doris/[email protected]。
-- **占位符支持**
- `_HOST` 会自动替换为实际连接的 Metastore 主机名(适用于多节点 Metastore 集群)。
-- **示例**
- ```plaintext
- doris/[email protected]
- doris/[email protected] # 动态解析实际主机名
- ```
+- 描述
+  连接到 Hive Metastore 服务时使用的 Kerberos 主体。例如:`doris/[email protected]` 或
`doris/[email protected]`。
+
+- 占位符支持
+ `_HOST` 会自动替换为实际连接的 Metastore 主机名(适用于多节点 Metastore 集群)。
+
+- 示例
+ ```plaintext
+ doris/[email protected]
+ doris/[email protected] # 动态解析实际主机名
+ ```
+
#### `hive.metastore.client.keytab`
-- **描述**
- 包含指定的 principal 的密钥的密钥表文件的路径。运行所有 FE 的操作系统用户必须有权限读取此文件。
-- **示例**
- ```properties
- hive.metastore.client.keytab = conf/doris.keytab
- ```
+- 描述
+ 包含指定的 principal 的密钥的密钥表文件的路径。运行所有 FE 的操作系统用户必须有权限读取此文件。
+- 示例
+ ```plaintext
+ "hive.metastore.client.keytab" = "conf/doris.keytab"
+ ```
#### 完整示例
+
启用 Kerberos 认证
-```properties
-hive.metastore.authentication.type = kerberos
-hive.metastore.service.principal = hive/[email protected]
-hive.metastore.client.principal = doris/[email protected]
-hive.metastore.client.keytab = etc/doris/conf/doris.keytab
+
+```plaintext
+"hive.metastore.authentication.type" = "kerberos",
+"hive.metastore.service.principal" = "hive/[email protected]",
+"hive.metastore.client.principal" = "doris/[email protected]",
+"hive.metastore.client.keytab" = "etc/doris/conf/doris.keytab"
```
-
+
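
上述 Kerberos 相关属性同样作为 Catalog 属性传入。下面是一个示意性的完整语句(Catalog 名称、thrift 地址、`<REALM>` 等均为假设的占位值,需替换为实际环境配置):

```sql
-- 示例:创建启用 Kerberos 认证的 Hive Catalog(各取值均为示意占位值)
CREATE CATALOG hive_kerberos PROPERTIES (
    "type" = "hms",
    "hive.metastore.uris" = "thrift://172.21.0.1:9083",
    "hive.metastore.authentication.type" = "kerberos",
    "hive.metastore.service.principal" = "hive/_HOST@<REALM>",
    "hive.metastore.client.principal" = "doris/_HOST@<REALM>",
    "hive.metastore.client.keytab" = "etc/doris/conf/doris.keytab"
);
```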
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/iceberg-rest.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/iceberg-rest.md
index f4cf5eb31a7..ad838433fc2 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/iceberg-rest.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/metastores/iceberg-rest.md
@@ -28,8 +28,8 @@ under the License.
| 属性名称 | 曾用名 | 描述
| 默认值 | 是否必须 |
| -------------------------- | --- |
------------------------------------------- | ---- | ---------- |
-| `iceberg.rest.uri` | uri | Rest Catalog
连接地址。示例:http://172.21.0.1:8181 | | 是 |
-| `iceberg.rest.security.type` | | Rest Catalog 的安全认证方式。支持 `none`或`oauth2`
| none | oauth2 尚未支持 |
+| `iceberg.rest.uri` | uri | Rest Catalog
连接地址。示例:`http://172.21.0.1:8181` | | 是 |
+| `iceberg.rest.security.type` | | Rest Catalog 的安全认证方式。支持 `none` 或 `oauth2`
| `none` | `oauth2` 尚未支持 |
| `iceberg.rest.prefix` | |
| | 尚未支持 |
| `iceberg.rest.oauth2.xxx` | | oauth2 认证相关信息
| | 尚未支持 |
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
index ab333afc420..9da4fa5621e 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/aliyun-oss.md
@@ -24,9 +24,9 @@ specific language governing permissions and limitations
under the License.
-->
-# 阿里云OSS访问参数
+# 阿里云 OSS 访问参数
-本文档介绍访问阿里云OSS所需的参数,这些参数适用于以下场景:
+本文档介绍访问阿里云 OSS 所需的参数,这些参数适用于以下场景:
- Catalog 属性
- Table Valued Function 属性
@@ -59,10 +59,10 @@ under the License.
### 示例配置
-```properties
-"oss.access_key" = "ak"
-"oss.secret_key" = "sk"
-"oss.endpoint" = "oss-cn-beijing.aliyuncs.com"
+```plaintext
+"oss.access_key" = "ak",
+"oss.secret_key" = "sk",
+"oss.endpoint" = "oss-cn-beijing.aliyuncs.com",
"oss.region" = "cn-beijing"
```
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/hdfs.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/hdfs.md
index 2f2f1981d88..f7e504013af 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/hdfs.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/hdfs.md
@@ -35,67 +35,75 @@ under the License.
## 参数总览
| 属性名称 | 曾用名
| 描述
|
默认值 | 是否必须 |
|------------------------------------------|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------|
-| `hdfs.authentication.type` | `hadoop.security.authentication`
| 访问HDFS的认证类型。支持 `kerberos` 和 `simple`
|
`simple` | 否 |
+| `hdfs.authentication.type` | `hadoop.security.authentication`
| 访问 HDFS 的认证类型。支持 `kerberos` 和 `simple`
|
`simple` | 否 |
| `hdfs.authentication.kerberos.principal` | `hadoop.kerberos.principal`
| 当认证类型为 `kerberos` 时,指定 principal
| -
| 否 |
| `hdfs.authentication.kerberos.keytab` | `hadoop.kerberos.keytab`
| 当认证类型为 `kerberos` 时,指定 keytab
| -
| 否 |
-| `hdfs.impersonation.enabled` | -
| 如果为 `true`,将开启HDFS的impersonation功能。会使用 `core-site.xml` 中配置的代理用户,来代理 Doris
的登录用户,执行HDFS操作
| `尚未支持` | - |
-| `hadoop.username` | -
| 当认证类型为 `simple` 时,会使用此用户来访问HDFS。默认情况下,会使用运行 Doris 进程的 Linux 系统用户进行访问
| -
| - |
-| `hadoop.config.resources` | -
| 指定 HDFS 相关配置文件目录(需包含 `hdfs-site.xml` 和
`core-site.xml`),需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改
fe.conf/be.conf 中的hadoop_config_dir 来更改默认路径)。所有 FE 和 BE
节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml` | -
| - |
-| `dfs.nameservices` | -
| 手动配置HDFS高可用集群的参数。若使用 `hadoop.config.resources` 配置,则会自动从 `hdfs-site.xml`
读取参数。需配合以下参数:<br>`dfs.ha.namenodes.your-nameservice`<br>`dfs.namenode.rpc-address.your-nameservice.nn1`<br>`dfs.client.failover.proxy.provider`
等 | - | - |
+| `hdfs.impersonation.enabled` | -
| 如果为 `true`,将开启 HDFS 的 impersonation 功能。会使用 `core-site.xml` 中配置的代理用户,来代理 Doris
的登录用户,执行 HDFS 操作
| `尚未支持` | - |
+| `hadoop.username` | -
| 当认证类型为 `simple` 时,会使用此用户来访问 HDFS。默认情况下,会使用运行 Doris 进程的 Linux 系统用户进行访问
|
- | - |
+| `hadoop.config.resources` | -
| 指定 HDFS 相关配置文件目录(需包含 `hdfs-site.xml` 和
`core-site.xml`),需使用相对路径,默认目录为(FE/BE)部署目录下的 /plugins/hadoop_conf/(可修改
fe.conf/be.conf 中的 hadoop_config_dir 来更改默认路径)。所有 FE 和 BE
节点需配置相同相对路径。示例:`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml` | -
| - |
+| `dfs.nameservices` | -
| 手动配置 HDFS 高可用集群的参数。若使用 `hadoop.config.resources` 配置,则会自动从 `hdfs-site.xml`
读取参数。需配合以下参数:<br>`dfs.ha.namenodes.your-nameservice`<br>`dfs.namenode.rpc-address.your-nameservice.nn1`<br>`dfs.client.failover.proxy.provider`
等 | - | - |
### 认证配置
-- hdfs.authentication.type: 用于指定认证类型。可选值为 kerberos 或 simple。如果选择
kerberos,系统将使用 Kerberos 认证同 HDFS 交互;如果使用 simple ,表示不使用认证,适用于开放的 HDFS 集群。选择
kerberos 需要配置相应的 principal 和 keytab。
-- hdfs.authentication.kerberos.principal: 当认证类型为 kerberos 时,指定 Kerberos 的
principal。Kerberos principal 是一个唯一标识身份的字符串,通常包括服务名、主机名和域名。
-- hdfs.authentication.kerberos.keytab: 该参数指定用于 Kerberos 认证的 keytab 文件路径。keytab
文件用于存储加密的凭证,允许系统自动进行认证,无需用户手动输入密码。
+- `hdfs.authentication.type`: 用于指定认证类型。可选值为 `kerberos` 或 `simple`。如果选择
`kerberos`,系统将使用 Kerberos 认证同 HDFS 交互;如果使用 `simple`,表示不使用认证,适用于开放的 HDFS 集群。选择
kerberos 需要配置相应的 principal 和 keytab。
+- `hdfs.authentication.kerberos.principal`: 当认证类型为 `kerberos` 时,指定 Kerberos 的
principal。Kerberos principal 是一个唯一标识身份的字符串,通常包括服务名、主机名和域名。
+- `hdfs.authentication.kerberos.keytab`: 该参数指定用于 Kerberos 认证的 keytab
文件路径。keytab 文件用于存储加密的凭证,允许系统自动进行认证,无需用户手动输入密码。
+
#### 认证类型
HDFS 支持两种认证方式:
- Kerberos
- Simple
##### Simple 认证
-Simple 认证适用于未开启 Kerberos 的 HDFS 集群。生产环境不建议使用此方式。
+Simple 认证适用于未开启 Kerberos 的 HDFS 集群。
-开启 Simple 认证方式,需要设置以下参数:
+使用 Simple 认证方式,需要设置以下参数:
```
-hdfs.authentication.type: simple
+"hdfs.authentication.type" = "simple"
```
+
Simple 认证模式下,可以使用 `hadoop.username` 参数来指定用户名。如不指定,则默认使用当前进程运行的用户名。
**示例:**
使用 `lakers` 用户名访问 HDFS
-```properties
-hdfs.authentication.type = simple
-hadoop.username = lakers
+```plaintext
+"hdfs.authentication.type" = "simple",
+"hadoop.username" = "lakers"
```
+
使用默认系统用户访问 HDFS
-```properties
-hdfs.authentication.type = simple
+```plaintext
+"hdfs.authentication.type" = "simple"
```
##### Kerberos 认证
Kerberos 认证适用于已开启 Kerberos 的 HDFS 集群。
-开启 Kerberos 认证方式,需要设置以下参数:
-```properties
-hdfs.authentication.type = kerberos
-hdfs.authentication.kerberos.principal = hdfs/[email protected]
-hdfs.authentication.kerberos.keytab = /etc/security/keytabs/hdfs.keytab
+使用 Kerberos 认证方式,需要设置以下参数:
+
+```plaintext
+"hdfs.authentication.type" = "kerberos",
+"hdfs.authentication.kerberos.principal" = "<your_principal>",
+"hdfs.authentication.kerberos.keytab" = "<your_keytab>"
```
+
Kerberos 认证模式下,需要设置 Kerberos 的 principal 和 keytab 文件路径。
-Doris 将以该 hdfs.authentication.kerberos.principal 属性指定的主体身份访问 HDFS, 使用 keytab
指定的 keytab 对该 Principal 进行认证。
+
+Doris 将以该 `hdfs.authentication.kerberos.principal` 属性指定的主体身份访问 HDFS,使用 keytab
指定的 keytab 对该 Principal 进行认证。
**注意:**
- Keytab 文件需要在每个 FE 和 BE 节点上均存在,且路径相同,同时运行 Doris 进程的用户必须具有该 keytab 文件的读权限。
示例:
-```properties
-hdfs.authentication.type = kerberos
-hdfs.authentication.kerberos.principal = hdfs/[email protected]
-hdfs.authentication.kerberos.keytab = etc/security/keytabs/hdfs.keytab
+```plaintext
+"hdfs.authentication.type" = "kerberos",
+"hdfs.authentication.kerberos.principal" = "hdfs/[email protected]",
+"hdfs.authentication.kerberos.keytab" = "/etc/security/keytabs/hdfs.keytab"
```
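
这些 HDFS 参数既可用于 Catalog 属性,也可用于 Table Valued Function 属性。下面是一个示意性的 HDFS TVF 用法(uri、principal 与 keytab 路径均为假设的占位值):

```sql
-- 示例:通过 HDFS TVF 读取开启 Kerberos 认证的 HDFS 上的文件
-- uri、principal、keytab 均为示意占位值,需替换为实际环境配置
SELECT * FROM HDFS(
    "uri" = "hdfs://<namenode>:8020/path/to/demo.csv",
    "format" = "csv",
    "hdfs.authentication.type" = "kerberos",
    "hdfs.authentication.kerberos.principal" = "<your_principal>",
    "hdfs.authentication.kerberos.keytab" = "<your_keytab>"
);
```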
### 配置文件
+
Doris 支持通过 `hadoop.config.resources` 参数来指定 HDFS 相关配置文件目录。
+
配置文件目录需包含 `hdfs-site.xml` 和 `core-site.xml` 文件,默认目录为(FE/BE)部署目录下的
`/plugins/hadoop_conf/`。所有 FE 和 BE 节点需配置相同的相对路径。
如果配置文件包含上述参数,则优先使用用户显式配置的参数。配置文件可以指定多个文件,多个文件以逗号分隔。如
`hadoop/conf/core-site.xml,hadoop/conf/hdfs-site.xml`。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/huawei-obs.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/huawei-obs.md
index d2577ba9c2e..b75c0dea213 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/huawei-obs.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/huawei-obs.md
@@ -24,9 +24,9 @@ specific language governing permissions and limitations
under the License.
-->
-## 华为云OBS访问参数
+## 华为云 OBS 访问参数
-本文档介绍访问华为云OBS所需的参数,这些参数适用于以下场景:
+本文档介绍访问华为云 OBS 所需的参数,这些参数适用于以下场景:
- Catalog 属性
- Table Valued Function 属性
@@ -45,7 +45,7 @@ under the License.
| `s3.secret_key` | `obs.secret_key` | OBS secret key,与 access
key 配合使用的访问密钥 | | 是 |
| `s3.connection.maximum` | | S3 最大连接数,指定与 OBS
服务建立的最大连接数 | `50` | 否 |
| `s3.connection.request.timeout` | | S3 请求超时时间,单位为毫秒,指定连接
OBS 服务时的请求超时时间 | `3000` | 否 |
-| `s3.connection.timeout` | | S3 连接超时时间,单位为毫秒,指定与 OBS
服务建立连接时的超
+| `s3.connection.timeout`               |                  | S3 连接超时时间,单位为毫秒,指定与 OBS
服务建立连接时的超时时间 | `1000` | 否 |
### 认证配置
@@ -58,9 +58,9 @@ under the License.
### 配置示例
-```properties:
- s3.endpoint: obs.cn-north-4.myhuaweicloud.com
- s3.access_key: AKI******
- s3.secret_key: 5+******
- s3.region: cn-north-4
-```
\ No newline at end of file
+```plaintext
+"s3.access_key" = "ak",
+"s3.secret_key" = "sk",
+"s3.endpoint" = "obs.cn-north-4.myhuaweicloud.com",
+"s3.region" = "cn-north-4"
+```
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/tencent-cos.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/tencent-cos.md
index dad43a75746..566033d4643 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/tencent-cos.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/lakehouse/storages/tencent-cos.md
@@ -26,7 +26,7 @@ under the License.
## 腾讯云 COS 访问参数
-本文档介绍访问腾讯云COS所需的参数,这些参数适用于以下场景:
+本文档介绍访问腾讯云 COS 所需的参数,这些参数适用于以下场景:
- Catalog 属性
- Table Valued Function 属性
@@ -61,16 +61,10 @@ under the License.
### 示例配置
-```properties
-"cos.access_key"="ak"
-"cos.secret_key"="sk"
-"cos.endpoint"="cos.ap-beijing.myqcloud.com"
-"cos.region"="ap-beijing"
+```plaintext
+"cos.access_key" = "ak",
+"cos.secret_key" = "sk",
+"cos.endpoint" = "cos.ap-beijing.myqcloud.com",
+"cos.region" = "ap-beijing"
```
-```properties
-"s3.access_key"="ak"
-"s3.secret_key"="sk"
-"cos.endpoint"="cos.ap-beijing.myqcloud.com"
-"cos.region"="ap-beijing"
-```
\ No newline at end of file
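
这些 COS 参数同样可用于 Table Valued Function 属性。下面是一个示意性的 S3 TVF 用法(bucket、路径与密钥均为假设的占位值):

```sql
-- 示例:通过 S3 TVF 使用上述 COS 参数读取对象存储中的文件
-- uri 与 ak/sk 均为示意占位值,需替换为实际环境配置
SELECT * FROM S3(
    "uri" = "s3://your-bucket/path/demo.parquet",
    "format" = "parquet",
    "cos.access_key" = "ak",
    "cos.secret_key" = "sk",
    "cos.endpoint" = "cos.ap-beijing.myqcloud.com",
    "cos.region" = "ap-beijing"
);
```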
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]