This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 834e6c404e4 Fixed missing sidebar configuration items (#3320)
834e6c404e4 is described below

commit 834e6c404e494d0f1ca24a3643101b5864bae4b2
Author: yangon <[email protected]>
AuthorDate: Tue Feb 10 08:00:24 2026 +0800

    Fixed missing sidebar configuration items (#3320)
---
 .../import/import-way}/log-storage-analysis.md     | 18 ++---
 .../import/import-way}/log-storage-analysis.md     | 87 +++++++++++++++++++++-
 .../import/import-way}/log-storage-analysis.md     |  0
 .../import/import-way}/log-storage-analysis.md     |  4 +-
 .../import/import-way}/log-storage-analysis.md     | 87 +++++++++++++++++++++-
 sidebars.ts                                        | 40 +++++++++-
 .../import/import-way}/log-storage-analysis.md     |  0
 .../import/import-way}/log-storage-analysis.md     | 10 +--
 .../import/import-way}/log-storage-analysis.md     | 18 ++---
 versioned_sidebars/version-2.1-sidebars.json       | 69 ++++++++++++-----
 versioned_sidebars/version-3.x-sidebars.json       | 66 ++++++++++++----
 versioned_sidebars/version-4.x-sidebars.json       | 81 +++++++++++++++-----
 12 files changed, 393 insertions(+), 87 deletions(-)
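The sidebars.ts changes counted above register new doc IDs with Docusaurus. As a rough sketch (not the actual file) of the shape those entries take, assuming the standard `SidebarsConfig` structure of nested categories and doc-ID strings:

```typescript
// Minimal sketch of a Docusaurus-style sidebar tree. The local SidebarItem
// type stands in for Docusaurus's own SidebarsConfig types; the doc IDs
// mirror entries touched by this commit.
type SidebarItem =
  | string
  | { type: 'category'; label: string; items: SidebarItem[] };

const importWay: SidebarItem = {
  type: 'category',
  label: 'Import Way',
  items: [
    'data-operate/import/import-way/mysql-load-manual',
    // Entry whose absence caused the missing sidebar item:
    'data-operate/import/import-way/log-storage-analysis',
  ],
};

// Count doc-ID leaves so a sidebar change can be sanity-checked.
function countDocs(item: SidebarItem): number {
  if (typeof item === 'string') return 1;
  return item.items.reduce((n, child) => n + countDocs(child), 0);
}

console.log(countDocs(importWay)); // 2
```
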

diff --git a/versioned_docs/version-2.1/log-storage-analysis.md b/docs/data-operate/import/import-way/log-storage-analysis.md
similarity index 97%
copy from versioned_docs/version-2.1/log-storage-analysis.md
copy to docs/data-operate/import/import-way/log-storage-analysis.md
index da9022444dc..15bc039d78c 100644
--- a/versioned_docs/version-2.1/log-storage-analysis.md
+++ b/docs/data-operate/import/import-way/log-storage-analysis.md
@@ -1,9 +1,8 @@
 ---
 {
-    "title": "Log Storage and Analysis | Doris Docs",
+    "title": "Log Storage and Analysis",
     "language": "en",
-    "description": "Logs record key events in the system and contain crucial information such as the events' subject, time, location, and content.",
-    "sidebar_label": "Log Storage and Analysis"
+    "description": "Logs record key events in the system and contain crucial information such as the events' subject, time, location, and content."
 }
 ---
 
@@ -23,7 +22,7 @@ Focused on this solution, this chapter contains the following 3 sections:
 
 The following figure illustrates the architecture of the log storage and analysis platform built on Apache Doris:
 
-![Overall architecture](/images/doris-overall-architecture.png)
+![log storage and analysis platform architecture](/images/doris-overall-architecture.png)
 
 The architecture contains the following 3 parts:
 
@@ -160,6 +159,7 @@ Refer to the following table to learn about the values of indicators in the exam
 
 After estimating the resources, you need to deploy the cluster. Manual deployment is recommended for both physical and virtual environments. For manual deployment, refer to [Manual Deployment](./install/deploy-manually/integrated-storage-compute-deploy-manually).
 
+
 ### Step 3: Optimize FE and BE configurations
 
 After completing the cluster deployment, optimize the configuration parameters for the frontend (FE) and backend (BE) separately to better suit the log storage and analysis scenario.
@@ -189,7 +189,6 @@ You can find BE configuration fields in `be/conf/be.conf`. Refer to the followin
 | -          | `enable_file_cache = true`                                   | Enable file caching.                                         |
 | -          | `file_cache_path = [{"path": "/mnt/datadisk0/file_cache", "total_size":53687091200, "query_limit": "10737418240"},{"path": "/mnt/datadisk1/file_cache", "total_size":53687091200,"query_limit": "10737418240"}]` | Configure the cache path and related settings for cold data with the following specific configurations:<br/>`path`: cache path<br/>`total_size`: total size of the cache path in bytes, where 53687091200 bytes equals 50 GB<br/>`query_limit`: maximum amount of data tha [...]
 | Write      | `write_buffer_size = 1073741824`                             | Increase the file size of the write buffer to reduce small files and random I/O operations, improving performance. |
-| -          | `max_tablet_version_num = 20000`                             | In coordination with the time_series compaction strategy for table creation, allow more versions to remain temporarily unmerged. No longer required after version 2.1.11, as there is a time_series_max_tablet_version_num configuration |
 | Compaction | `max_cumu_compaction_threads = 8`                            | Set to CPU core count / 4, indicating that 1/4 of CPU resources are used for writing, 1/4 for background compaction, and 2/4 for queries and other operations. |
 | -          | `inverted_index_compaction_enable = true`                    | Enable inverted index compaction to reduce CPU consumption during compaction. |
 | -          | `enable_segcompaction = false` `enable_ordered_data_compaction = false` | Disable two compaction features that are unnecessary for log scenarios. |
@@ -216,7 +215,7 @@ Due to the distinct characteristics of both writing and querying log data, it is
 
 - For data partitioning:
 
-    - Enable [range partitioning](./table-design/data-partitioning/manual-partitioning.md#range-partitioning) (`PARTITION BY RANGE(`ts`)`) with [dynamic partitions](./table-design/data-partitioning/dynamic-partitioning)   (`"dynamic_partition.enable" = "true"`) managed automatically by day.
+    - Enable [range partitioning](./table-design/data-partitioning/manual-partitioning.md#range-partitioning) (`PARTITION BY RANGE(`ts`)`) with [dynamic partitions](./table-design/data-partitioning/dynamic-partitioning.md) (`"dynamic_partition.enable" = "true"`) managed automatically by day.
 
     - Use a field of the DATETIME type as the key (`DUPLICATE KEY(ts)`) for accelerated retrieval of the latest N log entries.
 
@@ -326,7 +325,7 @@ Follow these steps:
 ./bin/logstash-plugin install logstash-output-doris-1.2.0.gem
 ```
 
-2. Configure Logstash. Specify the following fields:
+1. Configure Logstash. Specify the following fields:
 
 - `logstash.yml`: Used to configure Logstash batch sizes and flush timings for improved data writing performance.
 
@@ -464,7 +463,7 @@ PROPERTIES (
 "max_batch_size" = "1073741824", 
 "load_to_single_tablet" = "true",
 "format" = "json"
-)  
+)
 FROM KAFKA (  
 "kafka_broker_list" = "host:port",  
 "kafka_topic" = "log__topic_",  
@@ -557,7 +556,7 @@ ORDER BY ts DESC LIMIT 10;
 
 Some third-party vendors offer visual log analysis development platforms based on Apache Doris, which include a log search and analysis interface similar to Kibana Discover. These platforms provide an intuitive, user-friendly interface for exploratory log analysis.
 
-![WebUI](/images/WebUI-EN.jpeg)
+![WebUI: a log search and analysis interface similar to Kibana](/images/WebUI-EN.jpeg)
 
 - Support for full-text search and SQL modes
 
@@ -570,4 +569,3 @@ Some third-party vendors offer visual log analysis development platforms based o
 - Display of top field values in search results for finding anomalies and further drilling down for analysis
 
 Please contact [email protected] to find more.
-
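The BE cache and buffer sizes in the hunks above are given in raw bytes (53687091200, 10737418240, 1073741824). A quick sketch to sanity-check those conversions, assuming the table's "GB" means binary (1024^3-byte) units:

```typescript
// The doc's table labels these values GB, but the numbers are
// binary (1024^3) multiples, i.e. gibibytes.
const GIB = 1024 ** 3;

const fileCacheTotalSize = 50 * GIB;   // file_cache_path total_size per path
const fileCacheQueryLimit = 10 * GIB;  // per-query cache read limit
const writeBufferSize = 1 * GIB;       // write_buffer_size

console.log(fileCacheTotalSize, fileCacheQueryLimit, writeBufferSize);
// 53687091200 10737418240 1073741824
```
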
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/log-storage-analysis.md
similarity index 74%
copy from i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
copy to i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/log-storage-analysis.md
index 0ab89ed4e3f..134b86d7aea 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/data-operate/import/import-way/log-storage-analysis.md
@@ -2,13 +2,95 @@
 {
     "title": "日志存储与分析 | Doris Docs",
     "language": "zh-CN",
-    "description": "在部署集群之前,首先应评估所需服务器硬件资源,包括以下几个关键步骤:",
+    "description": "日志是系统运行的详细记录,包含各种事件发生的主体、时间、位置、内容等关键信息。出于运维可观测、网络安全监控及业务分析等多重需求,企业通常需要将分散的日志采集起来,进行集中存储、查询和分析,以进一步从日志数据里挖掘出有价值的内容。",
     "sidebar_label": "日志存储与分析"
 }
 ---
 
 # 日志存储与分析
 
+日志是系统运行的详细记录,包含各种事件发生的主体、时间、位置、内容等关键信息。出于运维可观测、网络安全监控及业务分析等多重需求,企业通常需要将分散的日志采集起来,进行集中存储、查询和分析,以进一步从日志数据里挖掘出有价值的内容。
+
+针对此场景,Apache Doris 提供了相应解决方案,针对日志场景的特点,增加了倒排索引和极速全文检索能力,极致优化写入性能和存储空间,使得用户可以基于 Apache Doris 构建开放、高性能、低成本、统一的日志存储与分析平台。
+
+本文将围绕这一解决方案,介绍以下内容:
+
+- **整体架构**:说明基于 Apache Doris 构建的日志存储与分析平台的核心组成部分和基础架构。
+- **特点与优势**:说明基于 Apache Doris 构建的日志存储与分析平台的特点和优势。
+- **操作指南**:说明如何基于 Apache Doris 构建日志存储分析平台。
+
+## 整体架构
+
+基于 Apache Doris 构建的日志存储与分析平台的架构如下图:
+
+![Overall architecture](/images/doris-overall-architecture.png)
+
+此架构主要由 3 大部分组成:
+
+- **日志采集和预处理**:多种日志采集工具可以通过 HTTP APIs 将日志数据写入 Apache Doris。
+- **日志存储和分析引擎**:Apache Doris 提供高性能、低成本的统一日志存储,通过 SQL 接口提供丰富的检索分析能力。
+- **日志分析和告警界面**:多种日志检索分析工具通过标准 SQL 接口查询 Apache Doris,为用户提供简单易用的界面。
+
+## 特点与优势
+
+基于 Apache Doris 构建的日志存储与分析平台的特点和优势如下:
+
+- **高吞吐、低延迟日志写入**:支持每天百 TB 级、GB/s 级日志数据持续稳定写入,同时保持延迟 1s 以内。
+- **海量日志数据低成本存储**:支持 PB 级海量存储,相对于 Elasticsearch 存储成本节省 60% 到 80%,支持冷数据存储到 
S3/HDFS,存储成本再降 50%。
+- **高性能日志全文检索分析**:支持倒排索引和全文检索,日志场景常见查询(关键词检索明细、趋势分析等)秒级响应。
+- **开放、易用的上下游生态**:上游通过 Stream Load 通用 HTTP APIs 对接常见的日志采集系统和数据源 
Logstash、Filebeat、Fluentbit、Kafka 等,下游通过标准 MySQL 协议和语法对接各种可视化分析 UI,比如可观测性 
Grafana、BI 分析 Superset、类 Kibana 的日志检索 Doris WebUI。
+
+### 高性能、低成本
+
+经过 Benchmark 测试及生产验证,基于 Apache Doris 构建的日志存储与分析平台,性价比相对于 Elasticsearch 具有 5~10 
倍的提升。Apache Doris 的性能优势,主要得益于全球领先的高性能存储和查询引擎,以及下面一些针对日志场景的专门优化:
+
+- **写入吞吐提升**:Elasticsearch 写入的性能瓶颈在于解析数据和构建倒排索引的 CPU 消耗。相比之下,Apache Doris 
进行了两方面的写入优化:一方面利用 SIMD 等 CPU 向量化指令提升了 JSON 
数据解析速度和索引构建性能;另一方面针对日志场景简化了倒排索引结构,去掉日志场景不需要的正排等数据结构,有效降低了索引构建的复杂度。同样的资源,Apache 
Doris 的写入性能是 Elasticsearch 的 3~5 倍。
+- **存储成本降低**:Elasticsearch 存储瓶颈在于正排、倒排、Docvalue 列存多份存储和通用压缩算法压缩率较低。相比之下,Apache 
Doris 在存储上进行了以下优化:去掉正排,缩减了 30% 的索引数据量;采用列式存储和 Zstandard 压缩算法,压缩比可达到 5~10 倍,远高于 
Elasticsearch 的 1.5 倍;日志数据中冷数据访问频率很低,Apache Doris 
冷热分层功能可以将超过定义时间段的日志自动存储到更低的对象存储中,冷数据的存储成本可降低 70% 以上。同样的原始数据,Doris 的存储成本只需要 
Elasticsearch 的 20% 左右。
+- **查询性能提升**:Apache Doris 
将全文检索的流程简化,跳过了相关性打分等日志场景不需要的算法,加速基础的检索性能。同时针对日志场景常见的查询,比如查询包含某个关键字的最新 100 
条日志,在查询规划和执行上做专门的 TopN 动态剪枝等优化。
+
+### 分析能力强
+
+Apache Doris 支持标准 SQL、兼容 MySQL 协议和语法,因此基于 Apache Doris 构建的日志系统能够使用 SQL 
进行日志分析,这使得日志系统具备以下优势:
+
+- **简单易用**:工程师和数据分析师对于 SQL 非常熟悉,经验可以复用,不需要学习新的技术栈即可快速上手。
+- **生态丰富**:MySQL 生态是数据库领域使用最广泛的语言,因此可以与 MySQL 生态的集成和应用无缝衔接。Doris 可以利用 MySQL 
命令行与各种 GUI 工具、BI 工具等大数据生态结合,实现更复杂及多样化的数据处理分析需求。
+- **分析能力强**:SQL 语言已经成为数据库和大数据分析的事实标准,它具有强大的表达能力和功能,支持检索、聚合、多表 
JOIN、子查询、UDF、逻辑视图、物化视图等多种数据分析能力。
+
+### Flexible Schema
+
+下面是一个典型的 JSON 
格式半结构化日志样例。顶层字段是一些比较固定的字段,比如日志时间戳(`timestamp`),日志来源(`source`),日志所在机器(`node`),打日志的模块(`component`),日志级别(`level`),客户端请求标识(`clientRequestId`),日志内容(`message`),日志扩展属性(`properties`),基本上每条日志都会有。而扩展属性
 `properties` 的内部嵌套字段 `properties.size`、`properties.format` 
等是比较动态的,每条日志的字段可能不一样。
+
+```JSON  
+{  
+  "timestamp": "2014-03-08T00:50:03.8432810Z",
+  "source": "ADOPTIONCUSTOMERS81",
+  "node": "Engine000000000405",
+  "level": "Information",
+  "component": "DOWNLOADER",
+  "clientRequestId": "671db15d-abad-94f6-dd93-b3a2e6000672",
+  "message": "Downloading file path: 
benchmark/2014/ADOPTIONCUSTOMERS81_94_0.parquet.gz",
+  "properties": {
+    "size": 1495636750,
+    "format": "parquet",
+    "rowCount": 855138,
+    "downloadDuration": "00:01:58.3520561"
+  }
+}
+```
+
+Apache Doris 对 Flexible Schema 的日志数据提供了几个方面的支持:
+
+- 对于顶层字段的少量变化,可以通过 Light Schema Change 发起 ADD / DROP COLUMN 增加 / 删除列,ADD / 
DROP INDEX 增加 / 删除索引,能够在秒级完成 Schema 变更。用户在日志平台规划时只需考虑当前需要哪些字段创建索引。
+- 对于类似 `properties` 的扩展字段,提供了原生半结构化数据类型 `VARIANT`,可以写入任何 JSON 数据,自动识别 JSON 
中的字段名和类型,并自动拆分频繁出现的字段采用列式存储,以便于后续的分析,还可以对 `VARIANT` 创建倒排索引,加快内部字段的查询和检索。
+
+相对于 Elasticsearch 的 Dynamic Mapping,Apache Doris 的 Flexible Schema 有以下优势:
+
+- 允许一个字段有多种类型,`VARIANT` 自动对字段类型做冲突处理和类型提升,更好地适应日志数据的迭代变化。
+- `VARIANT` 自动将不频繁出现的字段合并成一个列存储,可避免字段、元数据、列过多导致性能问题。
+- 不仅可以动态加列,还可以动态删列、动态增加索引、动态删索引,无需像 Elasticsearch 在一开始对所有字段建索引,减少不必要的成本。
+
+## 操作指南
+
 ### 第 1 步:评估资源
 
 在部署集群之前,首先应评估所需服务器硬件资源,包括以下几个关键步骤:
@@ -86,7 +168,6 @@
 | -          | `enable_file_cache = true`                                   | 
开启文件缓存。                                               |
 | -          | `file_cache_path = [{"path": "/mnt/datadisk0/file_cache", 
"total_size":53687091200, "query_limit": "10737418240"},{"path": 
"/mnt/datadisk1/file_cache", "total_size":53687091200,"query_limit": 
"10737418240"}]` | 配置冷数据的缓存路径和相关设置,具体配置说明如下:<br />`path`:缓存路径<br 
/>`total_size`:该缓存路径的总大小,单位为字节,53687091200 字节等于 50 GB<br 
/>`query_limit`:单次查询可以从缓存路径中查询的最大数据量,单位为字节,10737418240 字节等于 10 GB |
 | 写入       | `write_buffer_size = 1073741824`                             | 
增加写入缓冲区(buffer)的文件大小,减少小文件和随机 I/O 操作,提升性能。 |
-| -          | `max_tablet_version_num = 20000`                             | 配合建表的 time_series compaction 策略,允许更多版本暂时未合并。2.1.11 版本后不再需要,有单独的 time_series_max_tablet_version_num 配置|
 | Compaction | `max_cumu_compaction_threads = 8`                            | 设置为 CPU 核数 / 4,意味着 CPU 资源的 1/4 用于写入,1/4 用于后台 Compaction,2/4 留给查询和其他操作。 |
 | -          | `inverted_index_compaction_enable = true`                    | 
开启索引合并(index compaction),减少 Compaction 时的 CPU 消耗。 |
 | -          | `enable_segcompaction = false` `enable_ordered_data_compaction 
= false` | 关闭日志场景不需要的两个 Compaction 功能。                   |
@@ -364,7 +445,7 @@ PROPERTIES (
 "max_batch_size" = "1073741824", 
 "load_to_single_tablet" = "true",
 "format" = "json"
-)  
+)
 FROM KAFKA (  
 "kafka_broker_list" = "host:port",  
 "kafka_topic" = "log__topic_",  
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/log-storage-analysis.md
similarity index 100%
copy from 
i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
copy to 
i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/data-operate/import/import-way/log-storage-analysis.md
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/data-operate/import/import-way/log-storage-analysis.md
similarity index 99%
copy from 
i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
copy to 
i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/data-operate/import/import-way/log-storage-analysis.md
index 0ab89ed4e3f..8b1b05d55de 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.x/data-operate/import/import-way/log-storage-analysis.md
@@ -86,7 +86,7 @@
 | -          | `enable_file_cache = true`                                   | 
开启文件缓存。                                               |
 | -          | `file_cache_path = [{"path": "/mnt/datadisk0/file_cache", 
"total_size":53687091200, "query_limit": "10737418240"},{"path": 
"/mnt/datadisk1/file_cache", "total_size":53687091200,"query_limit": 
"10737418240"}]` | 配置冷数据的缓存路径和相关设置,具体配置说明如下:<br />`path`:缓存路径<br 
/>`total_size`:该缓存路径的总大小,单位为字节,53687091200 字节等于 50 GB<br 
/>`query_limit`:单次查询可以从缓存路径中查询的最大数据量,单位为字节,10737418240 字节等于 10 GB |
 | 写入       | `write_buffer_size = 1073741824`                             | 
增加写入缓冲区(buffer)的文件大小,减少小文件和随机 I/O 操作,提升性能。 |
-| -          | `max_tablet_version_num = 20000`                             | 配合建表的 time_series compaction 策略,允许更多版本暂时未合并。2.1.11 版本后不再需要,有单独的 time_series_max_tablet_version_num 配置|
+| -          | `max_tablet_version_num = 20000`                             | 配合建表的 time_series compaction 策略,允许更多版本暂时未合并。3.0.7 版本后不再需要,有单独的 time_series_max_tablet_version_num 配置 |
 | Compaction | `max_cumu_compaction_threads = 8`                            | 设置为 CPU 核数 / 4,意味着 CPU 资源的 1/4 用于写入,1/4 用于后台 Compaction,2/4 留给查询和其他操作。 |
 | -          | `inverted_index_compaction_enable = true`                    | 
开启索引合并(index compaction),减少 Compaction 时的 CPU 消耗。 |
 | -          | `enable_segcompaction = false` `enable_ordered_data_compaction 
= false` | 关闭日志场景不需要的两个 Compaction 功能。                   |
@@ -364,7 +364,7 @@ PROPERTIES (
 "max_batch_size" = "1073741824", 
 "load_to_single_tablet" = "true",
 "format" = "json"
-)  
+)
 FROM KAFKA (  
 "kafka_broker_list" = "host:port",  
 "kafka_topic" = "log__topic_",  
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/data-operate/import/import-way/log-storage-analysis.md
similarity index 74%
rename from 
i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
rename to 
i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/data-operate/import/import-way/log-storage-analysis.md
index 0ab89ed4e3f..134b86d7aea 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/data-operate/import/import-way/log-storage-analysis.md
@@ -2,13 +2,95 @@
 {
     "title": "日志存储与分析 | Doris Docs",
     "language": "zh-CN",
-    "description": "在部署集群之前,首先应评估所需服务器硬件资源,包括以下几个关键步骤:",
+    "description": "日志是系统运行的详细记录,包含各种事件发生的主体、时间、位置、内容等关键信息。出于运维可观测、网络安全监控及业务分析等多重需求,企业通常需要将分散的日志采集起来,进行集中存储、查询和分析,以进一步从日志数据里挖掘出有价值的内容。",
     "sidebar_label": "日志存储与分析"
 }
 ---
 
 # 日志存储与分析
 
+日志是系统运行的详细记录,包含各种事件发生的主体、时间、位置、内容等关键信息。出于运维可观测、网络安全监控及业务分析等多重需求,企业通常需要将分散的日志采集起来,进行集中存储、查询和分析,以进一步从日志数据里挖掘出有价值的内容。
+
+针对此场景,Apache Doris 提供了相应解决方案,针对日志场景的特点,增加了倒排索引和极速全文检索能力,极致优化写入性能和存储空间,使得用户可以基于 Apache Doris 构建开放、高性能、低成本、统一的日志存储与分析平台。
+
+本文将围绕这一解决方案,介绍以下内容:
+
+- **整体架构**:说明基于 Apache Doris 构建的日志存储与分析平台的核心组成部分和基础架构。
+- **特点与优势**:说明基于 Apache Doris 构建的日志存储与分析平台的特点和优势。
+- **操作指南**:说明如何基于 Apache Doris 构建日志存储分析平台。
+
+## 整体架构
+
+基于 Apache Doris 构建的日志存储与分析平台的架构如下图:
+
+![Overall architecture](/images/doris-overall-architecture.png)
+
+此架构主要由 3 大部分组成:
+
+- **日志采集和预处理**:多种日志采集工具可以通过 HTTP APIs 将日志数据写入 Apache Doris。
+- **日志存储和分析引擎**:Apache Doris 提供高性能、低成本的统一日志存储,通过 SQL 接口提供丰富的检索分析能力。
+- **日志分析和告警界面**:多种日志检索分析工具通过标准 SQL 接口查询 Apache Doris,为用户提供简单易用的界面。
+
+## 特点与优势
+
+基于 Apache Doris 构建的日志存储与分析平台的特点和优势如下:
+
+- **高吞吐、低延迟日志写入**:支持每天百 TB 级、GB/s 级日志数据持续稳定写入,同时保持延迟 1s 以内。
+- **海量日志数据低成本存储**:支持 PB 级海量存储,相对于 Elasticsearch 存储成本节省 60% 到 80%,支持冷数据存储到 
S3/HDFS,存储成本再降 50%。
+- **高性能日志全文检索分析**:支持倒排索引和全文检索,日志场景常见查询(关键词检索明细、趋势分析等)秒级响应。
+- **开放、易用的上下游生态**:上游通过 Stream Load 通用 HTTP APIs 对接常见的日志采集系统和数据源 
Logstash、Filebeat、Fluentbit、Kafka 等,下游通过标准 MySQL 协议和语法对接各种可视化分析 UI,比如可观测性 
Grafana、BI 分析 Superset、类 Kibana 的日志检索 Doris WebUI。
+
+### 高性能、低成本
+
+经过 Benchmark 测试及生产验证,基于 Apache Doris 构建的日志存储与分析平台,性价比相对于 Elasticsearch 具有 5~10 
倍的提升。Apache Doris 的性能优势,主要得益于全球领先的高性能存储和查询引擎,以及下面一些针对日志场景的专门优化:
+
+- **写入吞吐提升**:Elasticsearch 写入的性能瓶颈在于解析数据和构建倒排索引的 CPU 消耗。相比之下,Apache Doris 
进行了两方面的写入优化:一方面利用 SIMD 等 CPU 向量化指令提升了 JSON 
数据解析速度和索引构建性能;另一方面针对日志场景简化了倒排索引结构,去掉日志场景不需要的正排等数据结构,有效降低了索引构建的复杂度。同样的资源,Apache 
Doris 的写入性能是 Elasticsearch 的 3~5 倍。
+- **存储成本降低**:Elasticsearch 存储瓶颈在于正排、倒排、Docvalue 列存多份存储和通用压缩算法压缩率较低。相比之下,Apache 
Doris 在存储上进行了以下优化:去掉正排,缩减了 30% 的索引数据量;采用列式存储和 Zstandard 压缩算法,压缩比可达到 5~10 倍,远高于 
Elasticsearch 的 1.5 倍;日志数据中冷数据访问频率很低,Apache Doris 
冷热分层功能可以将超过定义时间段的日志自动存储到更低的对象存储中,冷数据的存储成本可降低 70% 以上。同样的原始数据,Doris 的存储成本只需要 
Elasticsearch 的 20% 左右。
+- **查询性能提升**:Apache Doris 
将全文检索的流程简化,跳过了相关性打分等日志场景不需要的算法,加速基础的检索性能。同时针对日志场景常见的查询,比如查询包含某个关键字的最新 100 
条日志,在查询规划和执行上做专门的 TopN 动态剪枝等优化。
+
+### 分析能力强
+
+Apache Doris 支持标准 SQL、兼容 MySQL 协议和语法,因此基于 Apache Doris 构建的日志系统能够使用 SQL 
进行日志分析,这使得日志系统具备以下优势:
+
+- **简单易用**:工程师和数据分析师对于 SQL 非常熟悉,经验可以复用,不需要学习新的技术栈即可快速上手。
+- **生态丰富**:MySQL 生态是数据库领域使用最广泛的语言,因此可以与 MySQL 生态的集成和应用无缝衔接。Doris 可以利用 MySQL 
命令行与各种 GUI 工具、BI 工具等大数据生态结合,实现更复杂及多样化的数据处理分析需求。
+- **分析能力强**:SQL 语言已经成为数据库和大数据分析的事实标准,它具有强大的表达能力和功能,支持检索、聚合、多表 
JOIN、子查询、UDF、逻辑视图、物化视图等多种数据分析能力。
+
+### Flexible Schema
+
+下面是一个典型的 JSON 
格式半结构化日志样例。顶层字段是一些比较固定的字段,比如日志时间戳(`timestamp`),日志来源(`source`),日志所在机器(`node`),打日志的模块(`component`),日志级别(`level`),客户端请求标识(`clientRequestId`),日志内容(`message`),日志扩展属性(`properties`),基本上每条日志都会有。而扩展属性
 `properties` 的内部嵌套字段 `properties.size`、`properties.format` 
等是比较动态的,每条日志的字段可能不一样。
+
+```JSON  
+{  
+  "timestamp": "2014-03-08T00:50:03.8432810Z",
+  "source": "ADOPTIONCUSTOMERS81",
+  "node": "Engine000000000405",
+  "level": "Information",
+  "component": "DOWNLOADER",
+  "clientRequestId": "671db15d-abad-94f6-dd93-b3a2e6000672",
+  "message": "Downloading file path: 
benchmark/2014/ADOPTIONCUSTOMERS81_94_0.parquet.gz",
+  "properties": {
+    "size": 1495636750,
+    "format": "parquet",
+    "rowCount": 855138,
+    "downloadDuration": "00:01:58.3520561"
+  }
+}
+```
+
+Apache Doris 对 Flexible Schema 的日志数据提供了几个方面的支持:
+
+- 对于顶层字段的少量变化,可以通过 Light Schema Change 发起 ADD / DROP COLUMN 增加 / 删除列,ADD / 
DROP INDEX 增加 / 删除索引,能够在秒级完成 Schema 变更。用户在日志平台规划时只需考虑当前需要哪些字段创建索引。
+- 对于类似 `properties` 的扩展字段,提供了原生半结构化数据类型 `VARIANT`,可以写入任何 JSON 数据,自动识别 JSON 
中的字段名和类型,并自动拆分频繁出现的字段采用列式存储,以便于后续的分析,还可以对 `VARIANT` 创建倒排索引,加快内部字段的查询和检索。
+
+相对于 Elasticsearch 的 Dynamic Mapping,Apache Doris 的 Flexible Schema 有以下优势:
+
+- 允许一个字段有多种类型,`VARIANT` 自动对字段类型做冲突处理和类型提升,更好地适应日志数据的迭代变化。
+- `VARIANT` 自动将不频繁出现的字段合并成一个列存储,可避免字段、元数据、列过多导致性能问题。
+- 不仅可以动态加列,还可以动态删列、动态增加索引、动态删索引,无需像 Elasticsearch 在一开始对所有字段建索引,减少不必要的成本。
+
+## 操作指南
+
 ### 第 1 步:评估资源
 
 在部署集群之前,首先应评估所需服务器硬件资源,包括以下几个关键步骤:
@@ -86,7 +168,6 @@
 | -          | `enable_file_cache = true`                                   | 
开启文件缓存。                                               |
 | -          | `file_cache_path = [{"path": "/mnt/datadisk0/file_cache", 
"total_size":53687091200, "query_limit": "10737418240"},{"path": 
"/mnt/datadisk1/file_cache", "total_size":53687091200,"query_limit": 
"10737418240"}]` | 配置冷数据的缓存路径和相关设置,具体配置说明如下:<br />`path`:缓存路径<br 
/>`total_size`:该缓存路径的总大小,单位为字节,53687091200 字节等于 50 GB<br 
/>`query_limit`:单次查询可以从缓存路径中查询的最大数据量,单位为字节,10737418240 字节等于 10 GB |
 | 写入       | `write_buffer_size = 1073741824`                             | 
增加写入缓冲区(buffer)的文件大小,减少小文件和随机 I/O 操作,提升性能。 |
-| -          | `max_tablet_version_num = 20000`                             | 配合建表的 time_series compaction 策略,允许更多版本暂时未合并。2.1.11 版本后不再需要,有单独的 time_series_max_tablet_version_num 配置|
 | Compaction | `max_cumu_compaction_threads = 8`                            | 设置为 CPU 核数 / 4,意味着 CPU 资源的 1/4 用于写入,1/4 用于后台 Compaction,2/4 留给查询和其他操作。 |
 | -          | `inverted_index_compaction_enable = true`                    | 
开启索引合并(index compaction),减少 Compaction 时的 CPU 消耗。 |
 | -          | `enable_segcompaction = false` `enable_ordered_data_compaction 
= false` | 关闭日志场景不需要的两个 Compaction 功能。                   |
@@ -364,7 +445,7 @@ PROPERTIES (
 "max_batch_size" = "1073741824", 
 "load_to_single_tablet" = "true",
 "format" = "json"
-)  
+)
 FROM KAFKA (  
 "kafka_broker_list" = "host:port",  
 "kafka_topic" = "log__topic_",  
diff --git a/sidebars.ts b/sidebars.ts
index 2d8f1d5608f..a151a7b51e7 100644
--- a/sidebars.ts
+++ b/sidebars.ts
@@ -59,6 +59,7 @@ const sidebars: SidebarsConfig = {
                                         
'install/deploy-on-kubernetes/integrated-storage-compute/install-doris-cluster',
                                         
'install/deploy-on-kubernetes/integrated-storage-compute/access-cluster',
                                         
'install/deploy-on-kubernetes/integrated-storage-compute/cluster-operation',
+                                        'install/deploy-on-kubernetes/integrated-storage-compute/helm-chart-deploy',
                                     ],
                                 },
                                 {
@@ -113,6 +114,7 @@ const sidebars: SidebarsConfig = {
                                 
'table-design/data-partitioning/auto-partitioning',
                                 
'table-design/data-partitioning/data-bucketing',
                                 'table-design/data-partitioning/common-issues',
+                                'table-design/data-partitioning/basic-concepts',
                             ],
                         },
                         'table-design/data-type',
@@ -131,6 +133,8 @@ const sidebars: SidebarsConfig = {
                                 },
                                 'table-design/index/bloomfilter',
                                 'table-design/index/ngram-bloomfilter-index',
+                                'table-design/index/bitmap-index',
+                                'table-design/index/inverted-index',
                             ],
                         },
                         'table-design/schema-change',
@@ -187,6 +191,7 @@ const sidebars: SidebarsConfig = {
                                 
'data-operate/import/import-way/insert-into-manual',
                                 
'data-operate/import/import-way/insert-into-values-manual',
                                 
'data-operate/import/import-way/mysql-load-manual',
+                                "data-operate/import/import-way/log-storage-analysis"
                             ],
                         },
                         {
@@ -197,6 +202,7 @@ const sidebars: SidebarsConfig = {
                                 'data-operate/import/file-format/json',
                                 'data-operate/import/file-format/parquet',
                                 'data-operate/import/file-format/orc',
+                                'data-operate/import/file-format/native',
                             ],
                         },
                         {
@@ -226,7 +232,13 @@ const sidebars: SidebarsConfig = {
                                 
'data-operate/import/load-internals/stream-load-in-complex-network',
                             ],
                         },
-                        "data-operate/import/streaming-job"
+                        'data-operate/import/streaming-job',
+                        'data-operate/import/cdc-load-manual-sample',
+                        {
+                            type: 'category',
+                            label: 'Scheduler',
+                            items: ['data-operate/scheduler/job-scheduler'],
+                        },
                     ],
                 },
                 {
@@ -652,6 +664,9 @@ const sidebars: SidebarsConfig = {
                                 'admin-manual/auth/integrations/aws-iam-role',
                             ],
                         },
+                        'admin-manual/auth/ldap',
+                        'admin-manual/auth/ranger',
+                        'admin-manual/auth/user-privilege',
                     ],
                 },
             ],
@@ -754,6 +769,7 @@ const sidebars: SidebarsConfig = {
                         'admin-manual/config/fe-config',
                         'admin-manual/config/be-config',
                         'admin-manual/config/user-property',
+                        'admin-manual/config/fe-config-template',
                     ],
                 },
                 {
@@ -965,6 +981,8 @@ const sidebars: SidebarsConfig = {
                         },
                     ],
                 },
+                'admin-manual/plugin-development-manual',
+                'admin-manual/small-file-mgr',
             ],
         },
         {
@@ -1033,6 +1051,7 @@ const sidebars: SidebarsConfig = {
                         'ecosystem/spark-load',
                     ],
                 },
+                'ecosystem/ecosystem-overview',
             ],
         },
         {
@@ -1112,6 +1131,7 @@ const sidebars: SidebarsConfig = {
                                         
'sql-manual/basic-element/sql-data-types/semi-structured/STRUCT',
                                         
'sql-manual/basic-element/sql-data-types/semi-structured/JSON',
                                         
'sql-manual/basic-element/sql-data-types/semi-structured/VARIANT',
+                                        'sql-manual/basic-element/sql-data-types/semi-structured/semi-structured-overview',
                                     ],
                                 },
                                 {
@@ -1395,6 +1415,9 @@ const sidebars: SidebarsConfig = {
                                         
'sql-manual/sql-functions/scalar-functions/string-functions/url-encode',
                                         
'sql-manual/sql-functions/scalar-functions/string-functions/uuid',
                                         
'sql-manual/sql-functions/scalar-functions/string-functions/xpath-string',
+                                        'sql-manual/sql-functions/scalar-functions/string-functions/date',
+                                        'sql-manual/sql-functions/scalar-functions/string-functions/to-iso8601',
+                                        'sql-manual/sql-functions/scalar-functions/string-functions/uuid-to-int',
                                     ],
                                 },
                                 {
@@ -1508,6 +1531,12 @@ const sidebars: SidebarsConfig = {
                                         
'sql-manual/sql-functions/scalar-functions/date-time-functions/years-add',
                                         
'sql-manual/sql-functions/scalar-functions/date-time-functions/years-diff',
                                         
'sql-manual/sql-functions/scalar-functions/date-time-functions/years-sub',
+                                        'sql-manual/sql-functions/scalar-functions/date-time-functions/current-timestamp',
+                                        'sql-manual/sql-functions/scalar-functions/date-time-functions/localtime',
+                                        'sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-ceil',
+                                        'sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor',
+                                        'sql-manual/sql-functions/scalar-functions/date-time-functions/quarters-diff',
+                                        'sql-manual/sql-functions/scalar-functions/date-time-functions/second-timestamp',
                                     ],
                                 },
                                 {
@@ -1732,6 +1761,10 @@ const sidebars: SidebarsConfig = {
                                         
'sql-manual/sql-functions/scalar-functions/ip-functions/to-ipv6',
                                         
'sql-manual/sql-functions/scalar-functions/ip-functions/to-ipv6-or-default',
                                         
'sql-manual/sql-functions/scalar-functions/ip-functions/to-ipv6-or-null',
+                                        'sql-manual/sql-functions/scalar-functions/ip-functions/inet-aton',
+                                        'sql-manual/sql-functions/scalar-functions/ip-functions/inet-ntoa',
+                                        'sql-manual/sql-functions/scalar-functions/ip-functions/inet6-aton',
+                                        'sql-manual/sql-functions/scalar-functions/ip-functions/inet6-ntoa',
                                     ],
                                 },
                                 {
@@ -1938,6 +1971,7 @@ const sidebars: SidebarsConfig = {
                                 
'sql-manual/sql-functions/window-functions/percent-rank',
                                 
'sql-manual/sql-functions/window-functions/rank',
                                 
'sql-manual/sql-functions/window-functions/row-number',
+                                'sql-manual/sql-functions/window-functions/nth-value',
                                 
'sql-manual/sql-functions/aggregate-functions/any-value',
                                 
'sql-manual/sql-functions/aggregate-functions/approx-count-distinct',
                                 
'sql-manual/sql-functions/aggregate-functions/array-agg',
@@ -2017,6 +2051,7 @@ const sidebars: SidebarsConfig = {
                                 
'sql-manual/sql-functions/table-functions/explode-json-array-string',
                                 
'sql-manual/sql-functions/table-functions/explode-json-array-string-outer',
                                 
'sql-manual/sql-functions/table-functions/explode-json-object',
+                                
'sql-manual/sql-functions/table-functions/explode-json-object-outer',
                                 
'sql-manual/sql-functions/table-functions/explode-map',
                                 
'sql-manual/sql-functions/table-functions/explode-map-outer',
                                 
'sql-manual/sql-functions/table-functions/explode-numbers',
@@ -2129,6 +2164,7 @@ const sidebars: SidebarsConfig = {
                                         
'sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-RESTORE',
                                         
'sql-manual/sql-statements/data-modification/backup-and-restore/CANCEL-RESTORE',
                                         
'sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-SNAPSHOT',
+                                        
'sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-BACKUP',
                                     ],
                                 },
                             ],
@@ -2377,6 +2413,7 @@ const sidebars: SidebarsConfig = {
                                 
'sql-manual/sql-statements/statistics/DROP-ANALYZE-JOB',
                                 
'sql-manual/sql-statements/statistics/KILL-ANALYZE-JOB',
                                 
'sql-manual/sql-statements/statistics/SHOW-ANALYZE',
+                                
'sql-manual/sql-statements/statistics/SHOW-QUEUED-ANZLYZE-JOBS',
                             ],
                         },
                         {
@@ -2406,6 +2443,7 @@ const sidebars: SidebarsConfig = {
                                         
'sql-manual/sql-statements/cluster-management/instance-management/ADD-BROKER',
                                         
'sql-manual/sql-statements/cluster-management/instance-management/DROP-BROKER',
                                         
'sql-manual/sql-statements/cluster-management/instance-management/SHOW-BROKER',
+                                        
'sql-manual/sql-statements/cluster-management/instance-management/ALTER-SYSTEM-RENAME-COMPUTE-GROUP',
                                     ],
                                 },
                                 {
diff --git a/versioned_docs/version-2.1/log-storage-analysis.md 
b/versioned_docs/version-2.1/data-operate/import/import-way/log-storage-analysis.md
similarity index 100%
copy from versioned_docs/version-2.1/log-storage-analysis.md
copy to 
versioned_docs/version-2.1/data-operate/import/import-way/log-storage-analysis.md
diff --git a/versioned_docs/version-2.1/log-storage-analysis.md 
b/versioned_docs/version-3.x/data-operate/import/import-way/log-storage-analysis.md
similarity index 99%
copy from versioned_docs/version-2.1/log-storage-analysis.md
copy to 
versioned_docs/version-3.x/data-operate/import/import-way/log-storage-analysis.md
index da9022444dc..1bf83156cdc 100644
--- a/versioned_docs/version-2.1/log-storage-analysis.md
+++ 
b/versioned_docs/version-3.x/data-operate/import/import-way/log-storage-analysis.md
@@ -23,7 +23,7 @@ Focused on this solution, this chapter contains the following 
3 sections:
 
 The following figure illustrates the architecture of the log storage and 
analysis platform built on Apache Doris:
 
-![Overall architecture](/images/doris-overall-architecture.png)
+![Log Storage and Analysis Platform built on Apache 
Doris](/images/doris-overall-architecture.png)
 
 The architecture contains the following 3 parts:
 
@@ -158,7 +158,7 @@ Refer to the following table to learn about the values of 
indicators in the exam
 
 ### Step 2: Deploy the cluster
 
-After estimating the resources, you need to deploy the cluster. It is 
recommended to deploy in both physical and virtual environments manually. For 
manual deployment, refer to [Manual 
Deployment](./install/deploy-manually/integrated-storage-compute-deploy-manually).
+After estimating the resources, you need to deploy the cluster. It is 
recommended to deploy in both physical and virtual environments manually. For 
manual deployment, refer to [Manual 
Deployment](../../docs/install/deploy-manually/integrated-storage-compute-deploy-manually).
 
 ### Step 3: Optimize FE and BE configurations
 
@@ -177,7 +177,7 @@ You can find FE configuration fields in `fe/conf/fe.conf`. 
Refer to the followin
 | `autobucket_min_buckets = 10`                                | Increase the 
minimum number of automatically bucketed buckets from 1 to 10 to avoid 
insufficient buckets when the log volume increases. |
 | `max_backend_heartbeat_failure_tolerance_count = 10`         | In log 
scenarios, the BE server may experience high pressure, leading to short-term 
timeouts, so increase the tolerance count from 1 to 10. |
 
-For more information, refer to [FE 
Configuration](./admin-manual/config/fe-config.md).
+For more information, refer to [FE 
Configuration](./admin-manual/config/fe-config).
 
 **Optimize BE configurations**
 
@@ -189,7 +189,7 @@ You can find BE configuration fields in `be/conf/be.conf`. 
Refer to the followin
 | -          | `enable_file_cache = true`                                   | 
Enable file caching.                                         |
 | -          | `file_cache_path = [{"path": "/mnt/datadisk0/file_cache", 
"total_size":53687091200, "query_limit": "10737418240"},{"path": 
"/mnt/datadisk1/file_cache", "total_size":53687091200,"query_limit": 
"10737418240"}]` | Configure the cache path and related settings for cold data 
with the following specific configurations:<br/>`path`: cache 
path<br/>`total_size`: total size of the cache path in bytes, where 53687091200 
bytes equals 50 GB<br/>`query_limit`: maximum amount of data tha [...]
 | Write      | `write_buffer_size = 1073741824`                             | 
Increase the file size of the write buffer to reduce small files and random I/O 
operations, improving performance. |
-| -          | `max_tablet_version_num = 20000`                             | 
In coordination with the time_series compaction strategy for table creation, 
allow more versions to remain temporarily unmerged. No longer required after 
version 2.1.11, as there is a time_series_max_tablet_version_num configuration |
+| -          | `max_tablet_version_num = 20000`                             | 
In coordination with the time_series compaction strategy for table creation, 
allow more versions to remain temporarily unmerged. No longer required after 
version 3.0.7, as there is a time_series_max_tablet_version_num configuration |
 | Compaction | `max_cumu_compaction_threads = 8`                            | 
Set to CPU core count / 4, indicating that 1/4 of CPU resources are used for 
writing, 1/4 for background compaction, and 1/2 for queries and other 
operations. |
 | -          | `inverted_index_compaction_enable = true`                    | 
Enable inverted index compaction to reduce CPU consumption during compaction. |
 | -          | `enable_segcompaction = false` `enable_ordered_data_compaction 
= false` | Disable two compaction features that are unnecessary for log 
scenarios. |
@@ -464,7 +464,7 @@ PROPERTIES (
 "max_batch_size" = "1073741824", 
 "load_to_single_tablet" = "true",
 "format" = "json"
-)  
+)
 FROM KAFKA (  
 "kafka_broker_list" = "host:port",  
 "kafka_topic" = "log__topic_",  
diff --git a/versioned_docs/version-2.1/log-storage-analysis.md 
b/versioned_docs/version-4.x/data-operate/import/import-way/log-storage-analysis.md
similarity index 97%
rename from versioned_docs/version-2.1/log-storage-analysis.md
rename to 
versioned_docs/version-4.x/data-operate/import/import-way/log-storage-analysis.md
index da9022444dc..15bc039d78c 100644
--- a/versioned_docs/version-2.1/log-storage-analysis.md
+++ 
b/versioned_docs/version-4.x/data-operate/import/import-way/log-storage-analysis.md
@@ -1,9 +1,8 @@
 ---
 {
-    "title": "Log Storage and Analysis | Doris Docs",
+    "title": "Log Storage and Analysis",
     "language": "en",
-    "description": "Logs record key events in the system and contain crucial 
information such as the events' subject, time, location, and content.",
-    "sidebar_label": "Log Storage and Analysis"
+    "description": "Logs record key events in the system and contain crucial 
information such as the events' subject, time, location, and content."
 }
 ---
 
@@ -23,7 +22,7 @@ Focused on this solution, this chapter contains the following 
3 sections:
 
 The following figure illustrates the architecture of the log storage and 
analysis platform built on Apache Doris:
 
-![Overall architecture](/images/doris-overall-architecture.png)
+![log storage and analysis platform 
architecture](/images/doris-overall-architecture.png)
 
 The architecture contains the following 3 parts:
 
@@ -160,6 +159,7 @@ Refer to the following table to learn about the values of 
indicators in the exam
 
 After estimating the resources, you need to deploy the cluster. It is 
recommended to deploy in both physical and virtual environments manually. For 
manual deployment, refer to [Manual 
Deployment](./install/deploy-manually/integrated-storage-compute-deploy-manually).
 
+
 ### Step 3: Optimize FE and BE configurations
 
 After completing the cluster deployment, it is necessary to optimize the 
configuration parameters for both the front-end and back-end separately, so as 
to better suit the scenario of log storage and analysis.
@@ -189,7 +189,6 @@ You can find BE configuration fields in `be/conf/be.conf`. 
Refer to the followin
 | -          | `enable_file_cache = true`                                   | 
Enable file caching.                                         |
 | -          | `file_cache_path = [{"path": "/mnt/datadisk0/file_cache", 
"total_size":53687091200, "query_limit": "10737418240"},{"path": 
"/mnt/datadisk1/file_cache", "total_size":53687091200,"query_limit": 
"10737418240"}]` | Configure the cache path and related settings for cold data 
with the following specific configurations:<br/>`path`: cache 
path<br/>`total_size`: total size of the cache path in bytes, where 53687091200 
bytes equals 50 GB<br/>`query_limit`: maximum amount of data tha [...]
 | Write      | `write_buffer_size = 1073741824`                             | 
Increase the file size of the write buffer to reduce small files and random I/O 
operations, improving performance. |
-| -          | `max_tablet_version_num = 20000`                             | 
In coordination with the time_series compaction strategy for table creation, 
allow more versions to remain temporarily unmerged. No longer required after 
version 2.1.11, as there is a time_series_max_tablet_version_num configuration |
 | Compaction | `max_cumu_compaction_threads = 8`                            | 
Set to CPU core count / 4, indicating that 1/4 of CPU resources are used for 
writing, 1/4 for background compaction, and 1/2 for queries and other 
operations. |
 | -          | `inverted_index_compaction_enable = true`                    | 
Enable inverted index compaction to reduce CPU consumption during compaction. |
 | -          | `enable_segcompaction = false` `enable_ordered_data_compaction 
= false` | Disable two compaction features that are unnecessary for log 
scenarios. |
@@ -216,7 +215,7 @@ Due to the distinct characteristics of both writing and 
querying log data, it is
 
 - For data partitioning:
 
-    - Enable [range 
partitioning](./table-design/data-partitioning/manual-partitioning.md#range-partitioning)
 (`PARTITION BY RANGE(`ts`)`) with [dynamic 
partitions](./table-design/data-partitioning/dynamic-partitioning)   
(`"dynamic_partition.enable" = "true"`) managed automatically by day.
+    - Enable [range 
partitioning](./table-design/data-partitioning/manual-partitioning.md#range-partitioning)
 (`PARTITION BY RANGE(`ts`)`) with [dynamic 
partitions](./table-design/data-partitioning/dynamic-partitioning.md) 
(`"dynamic_partition.enable" = "true"`) managed automatically by day.
 
     - Use a field in the DATETIME type as the key (`DUPLICATE KEY(ts)`) for 
accelerated retrieval of the latest N log entries.
 
@@ -326,7 +325,7 @@ Follow these steps:
 ./bin/logstash-plugin install logstash-output-doris-1.2.0.gem
 ```
 
-2. Configure Logstash. Specify the following fields:
+1. Configure Logstash. Specify the following fields:
 
 - `logstash.yml`: Used to configure Logstash batch processing log sizes and 
timings for improved data writing performance.
 
@@ -464,7 +463,7 @@ PROPERTIES (
 "max_batch_size" = "1073741824", 
 "load_to_single_tablet" = "true",
 "format" = "json"
-)  
+)
 FROM KAFKA (  
 "kafka_broker_list" = "host:port",  
 "kafka_topic" = "log__topic_",  
@@ -557,7 +556,7 @@ ORDER BY ts DESC LIMIT 10;
 
 Some third-party vendors offer visual log analysis development platforms based 
on Apache Doris, which include a log search and analysis interface similar to 
Kibana Discover. These platforms provide an intuitive and user-friendly 
exploratory log analysis interaction.
 
-![WebUI](/images/WebUI-EN.jpeg)
![WebUI: a log search and analysis interface similar to 
Kibana](/images/WebUI-EN.jpeg)
 
 - Support for full-text search and SQL modes
 
@@ -570,4 +569,3 @@ Some third-party vendors offer visual log analysis 
development platforms based o
 - Display of top field values in search results for finding anomalies and 
further drilling down for analysis
 
 Please contact [email protected] to find more.
-
diff --git a/versioned_sidebars/version-2.1-sidebars.json 
b/versioned_sidebars/version-2.1-sidebars.json
index be670f5f994..7e377745091 100644
--- a/versioned_sidebars/version-2.1-sidebars.json
+++ b/versioned_sidebars/version-2.1-sidebars.json
@@ -45,7 +45,8 @@
                                 
"install/deploy-on-kubernetes/install-config-cluster",
                                 
"install/deploy-on-kubernetes/install-doris-cluster",
                                 "install/deploy-on-kubernetes/access-cluster",
-                                
"install/deploy-on-kubernetes/cluster-operation"
+                                
"install/deploy-on-kubernetes/cluster-operation",
+                                
"install/deploy-on-kubernetes/helm-chart-deploy"
                             ]
                         },
                         {
@@ -90,7 +91,10 @@
                                 
"table-design/data-partitioning/dynamic-partitioning",
                                 
"table-design/data-partitioning/auto-partitioning",
                                 
"table-design/data-partitioning/data-bucketing",
-                                "table-design/data-partitioning/common-issues"
+                                "table-design/data-partitioning/common-issues",
+                                "table-design/data-partitioning/auto-bucket",
+                                
"table-design/data-partitioning/basic-concepts",
+                                
"table-design/data-partitioning/manual-bucketing"
                             ]
                         },
                         "table-design/data-type",
@@ -103,7 +107,8 @@
                                 "table-design/index/prefix-index",
                                 "table-design/index/inverted-index",
                                 "table-design/index/bloomfilter",
-                                "table-design/index/ngram-bloomfilter-index"
+                                "table-design/index/ngram-bloomfilter-index",
+                                "table-design/index/bitmap-index"
                             ]
                         },
                         "table-design/schema-change",
@@ -158,7 +163,8 @@
                                 
"data-operate/import/import-way/routine-load-manual",
                                 
"data-operate/import/import-way/insert-into-manual",
                                 
"data-operate/import/import-way/insert-into-values-manual",
-                                
"data-operate/import/import-way/mysql-load-manual"
+                                
"data-operate/import/import-way/mysql-load-manual",
+                                
"data-operate/import/import-way/log-storage-analysis"
                             ]
                         },
                         {
@@ -188,7 +194,15 @@
                         "data-operate/import/load-data-convert",
                         "data-operate/import/load-high-availability",
                         "data-operate/import/group-commit-manual",
-                        "data-operate/import/load-best-practices"
+                        "data-operate/import/load-best-practices",
+                        "data-operate/import/cdc-load-manual-sample",
+                        {
+                            "type": "category",
+                            "label": "Scheduler",
+                            "items": [
+                                "data-operate/scheduler/job-scheduler"
+                            ]
+                        }
                     ]
                 },
                 {
@@ -294,7 +308,9 @@
                             "label": "Distincting Counts",
                             "items": [
                                 
"query-acceleration/distinct-counts/bitmap-precise-deduplication",
-                                
"query-acceleration/distinct-counts/hll-approximate-deduplication"
+                                
"query-acceleration/distinct-counts/hll-approximate-deduplication",
+                                
"query-acceleration/distinct-counts/orthogonal-bitmap-manual",
+                                "query-acceleration/distinct-counts/using-hll"
                             ]
                         },
                         "query-acceleration/colocation-join",
@@ -520,7 +536,10 @@
                                 },
                                 "admin-manual/auth/encryption-function"
                             ]
-                        }
+                        },
+                        "admin-manual/auth/ldap",
+                        "admin-manual/auth/ranger",
+                        "admin-manual/auth/user-privilege"
                     ]
                 }
             ]
@@ -626,7 +645,8 @@
                         "admin-manual/config/config-dir",
                         "admin-manual/config/fe-config",
                         "admin-manual/config/be-config",
-                        "admin-manual/config/user-property"
+                        "admin-manual/config/user-property",
+                        "admin-manual/config/fe-config-template"
                     ]
                 },
                 {
@@ -903,7 +923,8 @@
                         "ecosystem/automq-load",
                         "ecosystem/hive-bitmap-udf"
                     ]
-                }
+                },
+                "ecosystem/ecosystem-overview"
             ]
         },
         {
@@ -975,7 +996,8 @@
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/MAP",
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/STRUCT",
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/JSON",
-                                        
"sql-manual/basic-element/sql-data-types/semi-structured/VARIANT"
+                                        
"sql-manual/basic-element/sql-data-types/semi-structured/VARIANT",
+                                        
"sql-manual/basic-element/sql-data-types/semi-structured/semi-structured-overview"
                                     ]
                                 },
                                 {
@@ -1088,7 +1110,8 @@
                                         
"sql-manual/sql-functions/scalar-functions/numeric-functions/mod",
                                         
"sql-manual/sql-functions/scalar-functions/numeric-functions/normal-cdf",
                                         
"sql-manual/sql-functions/scalar-functions/numeric-functions/uuid_numeric",
-                                        
"sql-manual/sql-functions/scalar-functions/numeric-functions/width-bucket"
+                                        
"sql-manual/sql-functions/scalar-functions/numeric-functions/width-bucket",
+                                        
"sql-manual/sql-functions/scalar-functions/numeric-functions/xor"
                                     ]
                                 },
                                 {
@@ -1166,7 +1189,12 @@
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/unhex",
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/url-decode",
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/url-encode",
-                                        
"sql-manual/sql-functions/scalar-functions/string-functions/uuid"
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/uuid",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/date",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/from-iso8601-date",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/to-iso8601",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/week-ceil",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/week-floor"
                                     ]
                                 },
                                 {
@@ -1196,7 +1224,6 @@
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-microsecond",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-millisecond",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-second",
-                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-second",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-unixtime",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/hour",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/hour-ceil",
@@ -1260,10 +1287,12 @@
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/year",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/year-ceil",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/year-floor",
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/yearweek",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-add",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-diff",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-sub",
-                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/yearweek"
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/current-timestamp",
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/second-timestamp"
                                     ]
                                 },
                                 {
@@ -1669,7 +1698,8 @@
                                 
"sql-manual/sql-functions/table-functions/explode-json-object",
                                 
"sql-manual/sql-functions/table-functions/explode-map",
                                 
"sql-manual/sql-functions/table-functions/explode-numbers",
-                                
"sql-manual/sql-functions/table-functions/explode-split"
+                                
"sql-manual/sql-functions/table-functions/explode-split",
+                                
"sql-manual/sql-functions/table-functions/posexplode"
                             ]
                         },
                         {
@@ -1681,6 +1711,7 @@
                                 
"sql-manual/sql-functions/table-valued-functions/frontends",
                                 
"sql-manual/sql-functions/table-valued-functions/frontends_disks",
                                 
"sql-manual/sql-functions/table-valued-functions/hdfs",
+                                
"sql-manual/sql-functions/table-valued-functions/hudi-meta",
                                 
"sql-manual/sql-functions/table-valued-functions/iceberg-meta",
                                 
"sql-manual/sql-functions/table-valued-functions/jobs",
                                 
"sql-manual/sql-functions/table-valued-functions/local",
@@ -1768,7 +1799,8 @@
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/RESTORE",
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-RESTORE",
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/CANCEL-RESTORE",
-                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-SNAPSHOT"
+                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-SNAPSHOT",
+                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-BACKUP"
                                     ]
                                 }
                             ]
@@ -2067,7 +2099,8 @@
                                         
"sql-manual/sql-statements/cluster-management/storage-management/CREATE-STORAGE-POLICY",
                                         
"sql-manual/sql-statements/cluster-management/storage-management/ALTER-STORAGE-POLICY",
                                         
"sql-manual/sql-statements/cluster-management/storage-management/DROP-STORAGE-POLICY",
-                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY"
+                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY",
+                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING"
                                     ]
                                 }
                             ]
@@ -2235,4 +2268,4 @@
             ]
         }
     ]
-}
+}
\ No newline at end of file
diff --git a/versioned_sidebars/version-3.x-sidebars.json 
b/versioned_sidebars/version-3.x-sidebars.json
index e364dc76140..eff9926f3de 100644
--- a/versioned_sidebars/version-3.x-sidebars.json
+++ b/versioned_sidebars/version-3.x-sidebars.json
@@ -56,7 +56,8 @@
                                         
"install/deploy-on-kubernetes/integrated-storage-compute/install-config-cluster",
                                         
"install/deploy-on-kubernetes/integrated-storage-compute/install-doris-cluster",
                                         
"install/deploy-on-kubernetes/integrated-storage-compute/access-cluster",
-                                        
"install/deploy-on-kubernetes/integrated-storage-compute/cluster-operation"
+                                        
"install/deploy-on-kubernetes/integrated-storage-compute/cluster-operation",
+                                        
"install/deploy-on-kubernetes/integrated-storage-compute/helm-chart-deploy"
                                     ]
                                 },
                                 {
@@ -115,7 +116,10 @@
                                 
"table-design/data-partitioning/dynamic-partitioning",
                                 
"table-design/data-partitioning/auto-partitioning",
                                 
"table-design/data-partitioning/data-bucketing",
-                                "table-design/data-partitioning/common-issues"
+                                "table-design/data-partitioning/common-issues",
+                                "table-design/data-partitioning/auto-bucket",
+                                
"table-design/data-partitioning/basic-concepts",
+                                
"table-design/data-partitioning/manual-bucketing"
                             ]
                         },
                         "table-design/data-type",
@@ -128,7 +132,8 @@
                                 "table-design/index/prefix-index",
                                 "table-design/index/inverted-index",
                                 "table-design/index/bloomfilter",
-                                "table-design/index/ngram-bloomfilter-index"
+                                "table-design/index/ngram-bloomfilter-index",
+                                "table-design/index/bitmap-index"
                             ]
                         },
                         "table-design/schema-change",
@@ -183,7 +188,8 @@
                                 
"data-operate/import/import-way/routine-load-manual",
                                 
"data-operate/import/import-way/insert-into-manual",
                                 
"data-operate/import/import-way/insert-into-values-manual",
-                                
"data-operate/import/import-way/mysql-load-manual"
+                                
"data-operate/import/import-way/mysql-load-manual",
+                                
"data-operate/import/import-way/log-storage-analysis"
                             ]
                         },
                         {
@@ -220,6 +226,14 @@
                             "items": [
                                 
"data-operate/import/load-internals/stream-load-in-complex-network"
                             ]
+                        },
+                        "data-operate/import/cdc-load-manual-sample",
+                        {
+                            "type": "category",
+                            "label": "Scheduler",
+                            "items": [
+                                "data-operate/scheduler/job-scheduler"
+                            ]
                         }
                     ]
                 },
@@ -414,7 +428,8 @@
                                 "lakehouse/catalogs/jdbc-ibmdb2-catalog",
                                 "lakehouse/catalogs/jdbc-clickhouse-catalog",
                                 "lakehouse/catalogs/jdbc-saphana-catalog",
-                                "lakehouse/catalogs/jdbc-oceanbase-catalog"
+                                "lakehouse/catalogs/jdbc-oceanbase-catalog",
+                                "lakehouse/catalogs/lakesoul-catalog"
                             ]
                         },
                         "lakehouse/file-analysis",
@@ -590,7 +605,10 @@
                                 
"admin-manual/auth/integrations/aws-authentication-and-authorization",
                                 "admin-manual/auth/integrations/aws-iam-role"
                             ]
-                        }
+                        },
+                        "admin-manual/auth/ldap",
+                        "admin-manual/auth/ranger",
+                        "admin-manual/auth/user-privilege"
                     ]
                 }
             ]
@@ -697,7 +715,8 @@
                         "admin-manual/config/config-dir",
                         "admin-manual/config/fe-config",
                         "admin-manual/config/be-config",
-                        "admin-manual/config/user-property"
+                        "admin-manual/config/user-property",
+                        "admin-manual/config/fe-config-template"
                     ]
                 },
                 {
@@ -977,7 +996,8 @@
                         "ecosystem/hive-bitmap-udf",
                         "ecosystem/hive-hll-udf"
                     ]
-                }
+                },
+                "ecosystem/ecosystem-overview"
             ]
         },
         {
@@ -1049,7 +1069,8 @@
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/MAP",
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/STRUCT",
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/JSON",
-                                        
"sql-manual/basic-element/sql-data-types/semi-structured/VARIANT"
+                                
"sql-manual/basic-element/sql-data-types/semi-structured/VARIANT",
+                                
"sql-manual/basic-element/sql-data-types/semi-structured/semi-structured-overview"
                                     ]
                                 },
                                 {
@@ -1245,7 +1266,13 @@
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/url-decode",
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/url-encode",
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/uuid",
-                                        
"sql-manual/sql-functions/scalar-functions/string-functions/xpath-string"
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/xpath-string",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/date",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/from-iso8601-date",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/regexp-count",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/to-iso8601",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/week-ceil",
+                                        
"sql-manual/sql-functions/scalar-functions/string-functions/week-floor"
                                     ]
                                 },
                                 {
@@ -1275,7 +1302,6 @@
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-microsecond",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-millisecond",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-second",
-                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-second",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/from-unixtime",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/hour",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/hour-ceil",
@@ -1343,10 +1369,16 @@
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/year",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/year-ceil",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/year-floor",
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/yearweek",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-add",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-diff",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-sub",
-                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/yearweek"
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/current-timestamp",
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/localtime",
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-ceil",
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor",
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/quarters-diff",
+                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/second-timestamp"
                                     ]
                                 },
                                 {
@@ -1769,7 +1801,8 @@
                                 
"sql-manual/sql-functions/table-functions/explode-json-object",
                                 
"sql-manual/sql-functions/table-functions/explode-map",
                                 
"sql-manual/sql-functions/table-functions/explode-numbers",
-                                
"sql-manual/sql-functions/table-functions/explode-split"
+                                
"sql-manual/sql-functions/table-functions/explode-split",
+                                
"sql-manual/sql-functions/table-functions/posexplode"
                             ]
                         },
                         {
@@ -1781,6 +1814,7 @@
                                 
"sql-manual/sql-functions/table-valued-functions/frontends",
                                 
"sql-manual/sql-functions/table-valued-functions/frontends_disks",
                                 
"sql-manual/sql-functions/table-valued-functions/hdfs",
+                                
"sql-manual/sql-functions/table-valued-functions/hudi-meta",
                                 
"sql-manual/sql-functions/table-valued-functions/iceberg-meta",
                                 
"sql-manual/sql-functions/table-valued-functions/jobs",
                                 
"sql-manual/sql-functions/table-valued-functions/local",
@@ -1868,7 +1902,8 @@
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/RESTORE",
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-RESTORE",
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/CANCEL-RESTORE",
-                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-SNAPSHOT"
+                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-SNAPSHOT",
+                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-BACKUP"
                                     ]
                                 }
                             ]
@@ -2183,7 +2218,8 @@
                                         
"sql-manual/sql-statements/cluster-management/storage-management/WARM-UP",
                                         
"sql-manual/sql-statements/cluster-management/storage-management/CANCEL-WARM-UP",
                                         
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB",
-                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT"
+                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT",
+                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING"
                                     ]
                                 }
                             ]
diff --git a/versioned_sidebars/version-4.x-sidebars.json 
b/versioned_sidebars/version-4.x-sidebars.json
index 86d32bc61d0..d0b15643f82 100644
--- a/versioned_sidebars/version-4.x-sidebars.json
+++ b/versioned_sidebars/version-4.x-sidebars.json
@@ -56,7 +56,8 @@
                                         
"install/deploy-on-kubernetes/integrated-storage-compute/install-config-cluster",
                                         
"install/deploy-on-kubernetes/integrated-storage-compute/install-doris-cluster",
                                         
"install/deploy-on-kubernetes/integrated-storage-compute/access-cluster",
-                                        
"install/deploy-on-kubernetes/integrated-storage-compute/cluster-operation"
+                                        
"install/deploy-on-kubernetes/integrated-storage-compute/cluster-operation",
+                                        
"install/deploy-on-kubernetes/integrated-storage-compute/helm-chart-deploy"
                                     ]
                                 },
                                 {
@@ -115,7 +116,8 @@
                                 
"table-design/data-partitioning/dynamic-partitioning",
                                 
"table-design/data-partitioning/auto-partitioning",
                                 
"table-design/data-partitioning/data-bucketing",
-                                "table-design/data-partitioning/common-issues"
+                                "table-design/data-partitioning/common-issues",
+                                "table-design/data-partitioning/basic-concepts"
                             ]
                         },
                         "table-design/data-type",
@@ -135,7 +137,9 @@
                                     ]
                                 },
                                 "table-design/index/bloomfilter",
-                                "table-design/index/ngram-bloomfilter-index"
+                                "table-design/index/ngram-bloomfilter-index",
+                                "table-design/index/bitmap-index",
+                                "table-design/index/inverted-index"
                             ]
                         },
                         "table-design/schema-change",
@@ -191,7 +195,8 @@
                                 
"data-operate/import/import-way/routine-load-manual",
                                 
"data-operate/import/import-way/insert-into-manual",
                                 
"data-operate/import/import-way/insert-into-values-manual",
-                                
"data-operate/import/import-way/mysql-load-manual"
+                                
"data-operate/import/import-way/mysql-load-manual",
+                                
"data-operate/import/import-way/log-storage-analysis"
                             ]
                         },
                         {
@@ -231,7 +236,15 @@
                                 
"data-operate/import/load-internals/stream-load-in-complex-network"
                             ]
                         },
-                        "data-operate/import/streaming-job"
+                        "data-operate/import/streaming-job",
+                        "data-operate/import/cdc-load-manual-sample",
+                        {
+                            "type": "category",
+                            "label": "Scheduler",
+                            "items": [
+                                "data-operate/scheduler/job-scheduler"
+                            ]
+                        }
                     ]
                 },
                 {
@@ -644,7 +657,10 @@
                                 
"admin-manual/auth/integrations/aws-authentication-and-authorization",
                                 "admin-manual/auth/integrations/aws-iam-role"
                             ]
-                        }
+                        },
+                        "admin-manual/auth/ldap",
+                        "admin-manual/auth/ranger",
+                        "admin-manual/auth/user-privilege"
                     ]
                 }
             ]
@@ -753,7 +769,8 @@
                         "admin-manual/config/config-dir",
                         "admin-manual/config/fe-config",
                         "admin-manual/config/be-config",
-                        "admin-manual/config/user-property"
+                        "admin-manual/config/user-property",
+                        "admin-manual/config/fe-config-template"
                     ]
                 },
                 {
@@ -964,7 +981,9 @@
                             ]
                         }
                     ]
-                }
+                },
+                "admin-manual/plugin-development-manual",
+                "admin-manual/small-file-mgr"
             ]
         },
         {
@@ -1036,7 +1055,8 @@
                         "ecosystem/hive-hll-udf",
                         "ecosystem/spark-load"
                     ]
-                }
+                },
+                "ecosystem/ecosystem-overview"
             ]
         },
         {
@@ -1116,7 +1136,8 @@
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/MAP",
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/STRUCT",
                                         
"sql-manual/basic-element/sql-data-types/semi-structured/JSON",
-                                        
"sql-manual/basic-element/sql-data-types/semi-structured/VARIANT"
+                                
"sql-manual/basic-element/sql-data-types/semi-structured/VARIANT",
+                                
"sql-manual/basic-element/sql-data-types/semi-structured/semi-structured-overview"
                                     ]
                                 },
                                 {
@@ -1397,7 +1418,11 @@
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/url-decode",
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/url-encode",
                                         
"sql-manual/sql-functions/scalar-functions/string-functions/uuid",
-                                        
"sql-manual/sql-functions/scalar-functions/string-functions/xpath-string"
+                                
"sql-manual/sql-functions/scalar-functions/string-functions/xpath-string",
+                                
"sql-manual/sql-functions/scalar-functions/string-functions/date",
+                                
"sql-manual/sql-functions/scalar-functions/string-functions/from-iso8601-date",
+                                
"sql-manual/sql-functions/scalar-functions/string-functions/to-iso8601",
+                                
"sql-manual/sql-functions/scalar-functions/string-functions/uuid-to-int"
                                     ]
                                 },
                                 {
@@ -1510,7 +1535,13 @@
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/yearweek",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-add",
                                         
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-diff",
-                                        
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-sub"
+                                
"sql-manual/sql-functions/scalar-functions/date-time-functions/years-sub",
+                                
"sql-manual/sql-functions/scalar-functions/date-time-functions/current-timestamp",
+                                
"sql-manual/sql-functions/scalar-functions/date-time-functions/localtime",
+                                
"sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-ceil",
+                                
"sql-manual/sql-functions/scalar-functions/date-time-functions/quarter-floor",
+                                
"sql-manual/sql-functions/scalar-functions/date-time-functions/quarters-diff",
+                                
"sql-manual/sql-functions/scalar-functions/date-time-functions/second-timestamp"
                                     ]
                                 },
                                 {
@@ -1733,7 +1764,11 @@
                                         
"sql-manual/sql-functions/scalar-functions/ip-functions/to-ipv4-or-null",
                                         
"sql-manual/sql-functions/scalar-functions/ip-functions/to-ipv6",
                                         
"sql-manual/sql-functions/scalar-functions/ip-functions/to-ipv6-or-default",
-                                        
"sql-manual/sql-functions/scalar-functions/ip-functions/to-ipv6-or-null"
+                                
"sql-manual/sql-functions/scalar-functions/ip-functions/to-ipv6-or-null",
+                                
"sql-manual/sql-functions/scalar-functions/ip-functions/inet-aton",
+                                
"sql-manual/sql-functions/scalar-functions/ip-functions/inet-ntoa",
+                                
"sql-manual/sql-functions/scalar-functions/ip-functions/inet6-aton",
+                                
"sql-manual/sql-functions/scalar-functions/ip-functions/inet6-ntoa"
                                     ]
                                 },
                                 {
@@ -1998,7 +2033,8 @@
                                 
"sql-manual/sql-functions/aggregate-functions/topn-weighted",
                                 
"sql-manual/sql-functions/aggregate-functions/var-samp",
                                 
"sql-manual/sql-functions/aggregate-functions/variance",
-                                
"sql-manual/sql-functions/aggregate-functions/window-funnel"
+                                
"sql-manual/sql-functions/aggregate-functions/window-funnel",
+                                
"sql-manual/sql-functions/window-functions/nth-value"
                             ]
                         },
                         {
@@ -2025,7 +2061,8 @@
                                 
"sql-manual/sql-functions/table-functions/explode-split",
                                 
"sql-manual/sql-functions/table-functions/explode-split-outer",
                                 
"sql-manual/sql-functions/table-functions/posexplode",
-                                
"sql-manual/sql-functions/table-functions/posexplode-outer"
+                                
"sql-manual/sql-functions/table-functions/posexplode-outer",
+                                
"sql-manual/sql-functions/table-functions/explode-json-object-outer"
                             ]
                         },
                         {
@@ -2129,7 +2166,8 @@
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/RESTORE",
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-RESTORE",
                                         
"sql-manual/sql-statements/data-modification/backup-and-restore/CANCEL-RESTORE",
-                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-SNAPSHOT"
+                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-SNAPSHOT",
+                                        
"sql-manual/sql-statements/data-modification/backup-and-restore/SHOW-BACKUP"
                                     ]
                                 }
                             ]
@@ -2379,7 +2417,8 @@
                                 
"sql-manual/sql-statements/statistics/SHOW-STATS",
                                 
"sql-manual/sql-statements/statistics/DROP-ANALYZE-JOB",
                                 
"sql-manual/sql-statements/statistics/KILL-ANALYZE-JOB",
-                                
"sql-manual/sql-statements/statistics/SHOW-ANALYZE"
+                                
"sql-manual/sql-statements/statistics/SHOW-ANALYZE",
+                                
"sql-manual/sql-statements/statistics/SHOW-QUEUED-ANZLYZE-JOBS"
                             ]
                         },
                         {
@@ -2408,7 +2447,8 @@
                                         
"sql-manual/sql-statements/cluster-management/instance-management/CANCEL-DECOMMISSION-BACKEND",
                                         
"sql-manual/sql-statements/cluster-management/instance-management/ADD-BROKER",
                                         
"sql-manual/sql-statements/cluster-management/instance-management/DROP-BROKER",
-                                        
"sql-manual/sql-statements/cluster-management/instance-management/SHOW-BROKER"
+                                        
"sql-manual/sql-statements/cluster-management/instance-management/SHOW-BROKER",
+                                        
"sql-manual/sql-statements/cluster-management/instance-management/ALTER-SYSTEM-RENAME-COMPUTE-GROUP"
                                     ]
                                 },
                                 {
@@ -2445,7 +2485,8 @@
                                         
"sql-manual/sql-statements/cluster-management/storage-management/WARM-UP",
                                         
"sql-manual/sql-statements/cluster-management/storage-management/CANCEL-WARM-UP",
                                         
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-WARM-UP-JOB",
-                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT"
+                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-CACHE-HOTSPOT",
+                                        
"sql-manual/sql-statements/cluster-management/storage-management/SHOW-STORAGE-POLICY-USING"
                                     ]
                                 }
                             ]
@@ -2635,4 +2676,4 @@
             ]
         }
     ]
-}
+}
\ No newline at end of file
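For context, the sidebar entries added in this commit follow the two item shapes Docusaurus sidebars use: a bare doc-id string, or a category object that groups doc ids under a label. A minimal sketch of that structure, using the "Scheduler" category and `cdc-load-manual-sample` id added above (the `SidebarItem` type here is a simplified stand-in, not the full Docusaurus type):

```typescript
// Simplified shapes of the two sidebar item kinds seen in this diff:
// a plain doc id, or a labeled category holding nested items.
type SidebarItem =
  | string
  | { type: "category"; label: string; items: SidebarItem[] };

// Category entry, as added to the import section in this commit.
const scheduler: SidebarItem = {
  type: "category",
  label: "Scheduler",
  items: ["data-operate/scheduler/job-scheduler"],
};

// Doc-id strings and category objects sit side by side in one items array.
const importItems: SidebarItem[] = [
  "data-operate/import/cdc-load-manual-sample",
  scheduler,
];

console.log(importItems.length); // 2
```

Adding a doc to `docs/` is not enough on its own; its id must also appear in `sidebars.ts` (and in each `versioned_sidebars/*.json` for versioned docs), which is what this commit fixes.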

