This is an automated email from the ASF dual-hosted git repository.
kassiez pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git
The following commit(s) were added to refs/heads/master by this push:
new 370eeeb56cc Update Cisco blog (#2386)
370eeeb56cc is described below
commit 370eeeb56cc930162e4ffe2bf59fa672288f4473
Author: KassieZ <[email protected]>
AuthorDate: Thu May 15 17:54:15 2025 +0800
Update Cisco blog (#2386)
## Versions
- [ ] dev
- [ ] 3.0
- [ ] 2.1
- [ ] 2.0
## Languages
- [ ] Chinese
- [ ] English
## Docs Checklist
- [ ] Checked by AI
- [ ] Test Cases Built
---
...doris-supercharges-cisco-webex-data-platform.md | 146 +++++++++++++++++++++
blog/release-note-2.1.9.md | 2 -
...tencent-music-migrate-elasticsearch-to-doris.md | 4 +-
docs/observability/overview.mdx | 2 +-
.../current/log-storage-analysis.md | 2 +-
.../current/observability/log-storage-analysis.md | 2 +-
.../current/observability/log.md | 2 +-
.../current/releasenotes/v2.1/release-2.1.0.md | 2 +-
.../version-1.2/releasenotes/v2.1/release-2.1.0.md | 2 +-
.../version-2.0/log-storage-analysis.md | 2 +-
.../version-2.0/releasenotes/v2.1/release-2.1.0.md | 2 +-
.../version-2.1/log-storage-analysis.md | 2 +-
.../observability/log-storage-analysis.md | 2 +-
.../version-2.1/observability/log.md | 2 +-
.../version-2.1/releasenotes/v2.1/release-2.1.0.md | 2 +-
.../version-3.0/log-storage-analysis.md | 2 +-
.../observability/log-storage-analysis.md | 2 +-
.../version-3.0/observability/log.md | 2 +-
.../version-3.0/releasenotes/v2.1/release-2.1.0.md | 2 +-
src/components/recent-blogs/recent-blogs.data.ts | 16 +--
src/constant/newsletter.data.ts | 16 +--
...ilures-and-higher-reliability-new-solutions.png | Bin 0 -> 143611 bytes
...oris-supercharges-cisco-webex-data-platform.jpg | Bin 0 -> 302495 bytes
.../less-failures-and-higher-reliability.png | Bin 0 -> 145085 bytes
...form-1.0-replace-Trino-Kyuubi-Pinot-Iceberg.png | Bin 0 -> 68643 bytes
.../cisco-webex/platform-2.0-Apache-Doris.png | Bin 0 -> 68218 bytes
.../unified-access-control-new-solution.png | Bin 0 -> 71791 bytes
.../unified-access-control-old-solution.png | Bin 0 -> 281349 bytes
.../unity-drives-efficiency-new-solution.png | Bin 0 -> 101871 bytes
.../unity-drives-efficiency-old-solution.png | Bin 0 -> 310450 bytes
...oris-supercharges-cisco-webex-data-platform.jpg | Bin 0 -> 302495 bytes
static/images/observability/studio-discover.jpeg | Bin 0 -> 831141 bytes
.../version-2.1/observability/overview.mdx | 2 +-
.../practical-guide/log-storage-analysis.md | 2 +-
.../version-3.0/observability/overview.mdx | 2 +-
35 files changed, 183 insertions(+), 39 deletions(-)
diff --git a/blog/doris-supercharges-cisco-webex-data-platform.md
b/blog/doris-supercharges-cisco-webex-data-platform.md
new file mode 100644
index 00000000000..88bc2ec5b91
--- /dev/null
+++ b/blog/doris-supercharges-cisco-webex-data-platform.md
@@ -0,0 +1,146 @@
+---
+{
+ 'title': 'How Apache Doris supercharges Cisco WebEx’s data platform',
+ 'summary': 'Cisco runs five Doris clusters (dozens of nodes) for WebEx,
handling 100,000+ queries per day, and 5TB+ daily real-time data ingestion.',
+ 'description': 'Cisco runs five Doris clusters (dozens of nodes) for
WebEx, handling 100,000+ queries per day, and 5TB+ daily real-time data
ingestion.',
+ 'date': '2025-05-15',
+ 'author': 'Apache Doris',
+ 'tags': ['Best Practice'],
+ 'picked': "true",
+ 'order': "1",
+ "image":
'/images/blogs/cisco-webex/doris-supercharges-cisco-webex-data-platform.jpg'
+}
+---
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+ http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Cisco WebEx is one of the world’s leading real-time conferencing platforms. It
is trusted by over 95% of Fortune 500 companies and supports more than 1.5
million meetings daily.
+
+Its growing user base and data volume have driven WebEx to build a data platform with stronger capabilities.
+
+It has replaced its complex, multi-system architecture (Trino, Pinot, Iceberg,
Kyuubi) with a unified solution based on **[Apache
Doris](https://doris.apache.org)**. Doris now powers both its data lakehouse
and query engine, improving performance and stability while reducing costs by
30%. This new architecture already supports critical Cisco projects such as CCA Peak Ports, dashboards, and unified authentication.
+
+## Why Cisco turned to Apache Doris
+
+It all started with their old data architecture.
+
+### Platform 1.0: Trino, Kyuubi, Pinot, Iceberg
+
+Previously, WebEx used Kafka for data ingestion, a Unified Data Platform (UDP)
to schedule Spark and Flink jobs, and Iceberg for data management. Queries were
served by Trino and Kyuubi, while Pinot handled OLAP.
+
+
+
+While this setup worked, its complexity gave rise to issues such as:
+
+- **Maintenance difficulty**: Maintaining multiple databases simultaneously
made operations complex and error-prone.
+- **Poor resource utilization**: Multiple systems led to data redundancy and
scattered query entry points, so CPU and memory were often underused or
inefficiently allocated.
+- **Data inconsistency**: Inconsistent calculations across different systems produced conflicting results, which frustrated users and undermined confidence in the data.
+- **Data governance challenges**: The fragmented metadata sources and varied
formats made it difficult to ensure accuracy, consistency, and trust across the
platform.
+
+Given these challenges, Cisco's most urgent need was to consolidate its technology stack and reduce system complexity.
+
+### Platform 2.0: Apache Doris
+
+After evaluating several solutions, they found Apache Doris to be an ideal fit
because it offers data lakehouse capabilities through its
[Multi-Catalog](https://doris.apache.org/docs/lakehouse/lakehouse-overview#multi-catalog)
feature. Multi-Catalog enables unified analytics across diverse data sources
(including Hive, Iceberg, Hudi, Paimon, Elasticsearch, MySQL, Oracle, and SQL
Server) without physically centralizing the data.
+
+So they replaced Apache Iceberg with Apache Doris as the data lakehouse, and adopted Doris as the unified analytics engine in place of the combination of Trino, Kyuubi, and Pinot.
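+Multi-Catalog keeps external data where it lives and maps it into Doris as a named catalog. As a minimal sketch (the catalog name, metastore URI, and table names below are hypothetical illustrations, not Cisco's actual setup), registering an Iceberg catalog backed by a Hive Metastore and querying it in place might look like:

```sql
-- Register an external Iceberg catalog (names and URI are illustrative)
CREATE CATALOG iceberg_lake PROPERTIES (
    "type" = "iceberg",
    "iceberg.catalog.type" = "hms",
    "hive.metastore.uris" = "thrift://metastore-host:9083"
);

-- Query the external table in place, no data movement required
SELECT count(*) FROM iceberg_lake.webex_db.meeting_events;
```

Because the external table is addressed as `catalog.database.table`, the same SQL entry point can federate Hive, Iceberg, Hudi, and JDBC sources.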
+
+
+
+Apache Doris can query data in place without moving it. This eliminates data
transfers and unlocks real-time analytics. The benefits are clear:
+
+- There are fewer dependency chains and less integration overhead.
+- Complex ETL and Spark Load processes are replaced by Doris' [Routine
Load](https://doris.apache.org/docs/data-operate/import/import-way/routine-load-manual),
where Doris directly and continuously consumes data from Kafka.
+- A single Doris cluster now replaces multiple legacy systems, removing
redundant storage and improving CPU and memory utilization. **As a result,
infrastructure costs are cut by 30%.**
+- Fewer moving parts mean fewer points of failure. Simplified architecture
enhances system stability and reduces the burden on engineering teams.
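+The Routine Load path mentioned above can be sketched in a few lines of SQL. Everything here (database, table, topic, and broker address) is a hypothetical example, not Cisco's actual job definition:

```sql
-- Continuously consume JSON events from a Kafka topic into a Doris table
-- (all names are illustrative)
CREATE ROUTINE LOAD demo_db.ingest_meeting_events ON meeting_events
PROPERTIES (
    "format" = "json",
    "max_batch_interval" = "20"
)
FROM KAFKA (
    "kafka_broker_list" = "broker-1:9092",
    "kafka_topic" = "meeting_events",
    "property.group.id" = "doris_ingest"
);
```

Once created, the job runs continuously inside Doris and can be controlled with `PAUSE ROUTINE LOAD` / `RESUME ROUTINE LOAD`, replacing an external ETL scheduler for this path.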
+
+While we’ve covered the technical wins from the architecture overhaul, let’s
not forget that data architecture exists to serve the business. **So how has
this transformation actually moved the needle for Cisco’s business?**
+
+## Unity drives efficiency
+
+### Fresher data and faster report generation
+
+The CCA Peak Ports project at Cisco generates reconciliation reports between WebEx and its partners based on the Peak Ports billing model.
+
+The Apache Doris–based transformation has simplified the data processing
pipeline. As a result:
+
+- **Data freshness: The report is updated the next day instead of two days
later.**
+- **Query performance: A report can be generated within 5 minutes instead of
10 minutes.**
+
+**Old solution**
+
+The old system relied on raw tables in an Oracle database as the data source.
A series of stored procedures were executed to generate intermediate results.
Then, a scheduled task written in Java further processed these intermediate
results and wrote the final output to a Kafka message queue. Finally, a Spark
job synchronized the data from Kafka to Iceberg to provide report services.
+
+
+
+
+**New solution**
+
+All data is pre-stored in Kafka. Then, using the [Doris Kafka
Connector](https://doris.apache.org/docs/ecosystem/doris-kafka-connector) and
[Routine
Load](https://doris.apache.org/docs/data-operate/import/import-way/routine-load-manual),
data is directly ingested from Kafka topics into Doris, where it is integrated
into detailed tables to form the DWD (Data Warehouse Detail) layer.
Pre-scheduled Spark jobs then perform deep analysis and transformation on the
DWD data. The final results a [...]
+
+
+
+### Fewer failures and higher reliability
+
+Cisco designed a dashboard system to provide an overview of data governance, with a particular focus on the WebEx data asset landscape and related analytical metrics. It serves as a data foundation to support business decision-making for the management team.
+
+**Old solution**
+
+In the early stages of the data governance platform, the system relied on
scheduled Spark jobs to extract data for schema analysis and lineage analysis.
The results were then sent to Kafka, and subsequently ingested in real time by
Pinot for further processing and visualization. However, data exchange and
synchronization across multiple components led to additional overhead and
latency.
+
+
+
+**New solution**
+
+Pinot has been replaced by Apache Doris. Leveraging Doris's Multi-Catalog capability, scheduled Doris tasks extract data from each engine and write it into primary key tables and aggregate tables.
+
+This approach eliminates the need to maintain 11 separate Spark jobs, allowing
the entire data pipeline to be created and managed within Doris.
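+Under this design, each extraction step reduces to plain SQL inside Doris. A hedged sketch, with every catalog, database, and table name invented for illustration: the aggregate table is populated directly from an external catalog, with no intermediate Spark job:

```sql
-- Pull lineage records from an external Hive catalog into a local
-- aggregate table (all names are illustrative)
INSERT INTO governance.lineage_daily_agg
SELECT engine_name, dt, count(*) AS record_cnt
FROM hive_lake.metadata_db.lineage_events
WHERE dt = '2025-05-14'
GROUP BY engine_name, dt;
```

In recent Doris versions, a statement like this can also be wrapped in a built-in scheduled job (`CREATE JOB ... ON SCHEDULE EVERY 1 DAY DO ...`), keeping scheduling as well as computation inside Doris.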
+
+In addition, the new architecture reduces dependency on CPU and memory
resources previously required by the UDP, avoiding job failures caused by
occasional resource constraints. Compared to Pinot, Doris consistently consumes
fewer resources for the same queries, thereby improving reliability and
stability of results.
+
+
+
+### Unified access control: one platform instead of three
+
+**Old solution**
+
+The early authentication and authorization setup suffered from problems such as fragmented query entry points and varying query complexity across users. This not only led to inefficient resource utilization, but also increased the risk of resource-intensive queries degrading the performance of others.
+
+From the user’s perspective, additional friction came from the need to manage
connections across multiple systems, each with inconsistent password update
cycles. As a result, users frequently had to reapply for authentication and
authorization, consuming significant time and effort.
+
+
+
+**New solution**
+
+The new system adopts Doris as a unified query service and enables centralized
data access across multiple engines, including Trino, Iceberg, and Pinot.
+
+To enhance usability, a Querybook service was introduced for all users,
providing a consistent interface for querying data from the data lake.
Additionally, a unified authentication and authorization service, Web Auth, was
built on top of Apache Ranger, and integrated with Doris for seamless access
control.
+
+Previously, users and administrators had to request and approve permissions
across three separate platforms (LDAP, Ranger, and the database). Now, access
is managed centrally through Web Auth.
+
+Furthermore, a SQL Ruleset Module was developed within Web Auth to synchronize
rule definitions with Doris. This enables interception of high-risk SQL
queries, helping prevent potential resource abuse.
+
+
+
+## Use case summary
+
+Cisco currently operates **five Doris clusters** with **dozens of nodes** for
its WebEx data platform, which supports an average of over **100,000 queries
per day** for online services, with daily real-time data ingestion reaching
more than **5TB**.
+
+The adoption of Doris has not only contributed to cost reduction and
efficiency gains, but has also driven broader exploration in the platform’s
architecture and business expansion strategy. These include gradually migrating
more business and application-layer workloads from their old data lakehouse
into Doris, replacing self-managed analytic storage solutions such as TiDB and
Kylin with Doris, and exploring emerging use cases such as AI on Doris and
Doris on Paimon.
+
+If you're looking to integrate Apache Doris into your data architecture and
leverage its powerful capabilities, [join the
community](https://join.slack.com/t/apachedoriscommunity/shared_invite/zt-2gmq5o30h-455W226d79zP3L96ZhXIoQ)
for discussions, advice, and technical support!
\ No newline at end of file
diff --git a/blog/release-note-2.1.9.md b/blog/release-note-2.1.9.md
index a1bef313559..6ce0eab5be6 100644
--- a/blog/release-note-2.1.9.md
+++ b/blog/release-note-2.1.9.md
@@ -6,8 +6,6 @@
'date': '2025-04-03',
'author': 'Apache Doris',
'tags': ['Release Notes'],
- 'picked': "true",
- 'order': "3",
"image": '/images/2.1.9.jpg'
}
---
diff --git a/blog/tencent-music-migrate-elasticsearch-to-doris.md
b/blog/tencent-music-migrate-elasticsearch-to-doris.md
index 9c41ea9fd35..5554bc13e08 100644
--- a/blog/tencent-music-migrate-elasticsearch-to-doris.md
+++ b/blog/tencent-music-migrate-elasticsearch-to-doris.md
@@ -5,9 +5,9 @@
'description': 'Handle full-text search, audience segmentation, and
aggregation analysis directly within Apache Doris and slash their storage costs
by 80% while boosting write performance by 4x',
'date': '2025-04-17',
'author': 'Apache Doris',
- 'tags': ['Best Practices'],
+ 'tags': ['Best Practice'],
'picked': "true",
- 'order': "1",
+ 'order': "2",
"image": '/images/tencent-music-migrate-elasticsearch-to-doris.jpg'
}
---
diff --git a/docs/observability/overview.mdx b/docs/observability/overview.mdx
index a8cab06e7c1..1ac5eaffbe5 100644
--- a/docs/observability/overview.mdx
+++ b/docs/observability/overview.mdx
@@ -168,4 +168,4 @@ Grafana connects to Doris via MySQL datasource, offering
unified visualization a
While Grafana's log visualization and analysis capabilities are relatively
basic compared to Kibana, third-party vendors have implemented Kibana-like
Discover features. These will soon be integrated into Grafana's Doris
datasource, enhancing unified observability visualization. Future enhancements
will include Elasticsearch protocol compatibility, enabling native Kibana
connections to Doris. For ELK users, replacing Elasticsearch with Doris
maintains existing logging and visualization ha [...]
-
+
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/log-storage-analysis.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/log-storage-analysis.md
index 64d1ef05dcf..78e6bd25011 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/log-storage-analysis.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/log-storage-analysis.md
@@ -215,7 +215,7 @@ Apache Doris 对 Flexible Schema 的日志数据提供了几个方面的支持
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/observability/log-storage-analysis.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/observability/log-storage-analysis.md
index 498806cf460..14d41592983 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/observability/log-storage-analysis.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/observability/log-storage-analysis.md
@@ -134,7 +134,7 @@ under the License.
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/observability/log.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/observability/log.md
index da5b0189bff..efb467875b1 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/observability/log.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/observability/log.md
@@ -143,7 +143,7 @@ under the License.
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/releasenotes/v2.1/release-2.1.0.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/releasenotes/v2.1/release-2.1.0.md
index e67d05552ea..9f69e3cbd8b 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/releasenotes/v2.1/release-2.1.0.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/releasenotes/v2.1/release-2.1.0.md
@@ -408,7 +408,7 @@ PROPERTIES (
:::note
-参考文档:[数据划分](../../table-design/data-partitioning/basic-concepts.mdx)
+参考文档:[数据划分](../../table-design/data-partitioning/data-distribution)
:::
### INSERT INTO SELECT 导入性能提升 100%
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/releasenotes/v2.1/release-2.1.0.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/releasenotes/v2.1/release-2.1.0.md
index 307e55d8be7..c0be9b4cc24 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/releasenotes/v2.1/release-2.1.0.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.2/releasenotes/v2.1/release-2.1.0.md
@@ -408,7 +408,7 @@ PROPERTIES (
:::note
-参考文档:[数据划分](./table-design/data-partitioning/basic-concepts.mdx)
+参考文档:[数据划分](./table-design/data-partitioning/data-distribution)
:::
### INSERT INTO SELECT 导入性能提升 100%
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/log-storage-analysis.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/log-storage-analysis.md
index 2d3290abff2..80a0c89905a 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/log-storage-analysis.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/log-storage-analysis.md
@@ -215,7 +215,7 @@ Apache Doris 对 Flexible Schema 的日志数据提供了几个方面的支持
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/releasenotes/v2.1/release-2.1.0.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/releasenotes/v2.1/release-2.1.0.md
index 5aa0548e4e1..f328290eeaf 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/releasenotes/v2.1/release-2.1.0.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/releasenotes/v2.1/release-2.1.0.md
@@ -408,7 +408,7 @@ PROPERTIES (
:::note
-参考文档:[数据划分](./table-design/data-partitioning/basic-concepts.mdx)
+参考文档:[数据划分](./table-design/data-partitioning/data-distribution)
:::
### INSERT INTO SELECT 导入性能提升 100%
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
index 498806cf460..14d41592983 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/log-storage-analysis.md
@@ -134,7 +134,7 @@ under the License.
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/observability/log-storage-analysis.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/observability/log-storage-analysis.md
index 498806cf460..14d41592983 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/observability/log-storage-analysis.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/observability/log-storage-analysis.md
@@ -134,7 +134,7 @@ under the License.
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/observability/log.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/observability/log.md
index da5b0189bff..efb467875b1 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/observability/log.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/observability/log.md
@@ -143,7 +143,7 @@ under the License.
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/releasenotes/v2.1/release-2.1.0.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/releasenotes/v2.1/release-2.1.0.md
index 2caf92bc267..13b075e9a40 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/releasenotes/v2.1/release-2.1.0.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.1/releasenotes/v2.1/release-2.1.0.md
@@ -408,7 +408,7 @@ PROPERTIES (
:::note
-参考文档:[数据划分](../../table-design/data-partitioning/basic-concepts.mdx)
+参考文档:[数据划分](../../table-design/data-partitioning/data-distribution)
:::
### INSERT INTO SELECT 导入性能提升 100%
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/log-storage-analysis.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/log-storage-analysis.md
index 498806cf460..14d41592983 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/log-storage-analysis.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/log-storage-analysis.md
@@ -134,7 +134,7 @@ under the License.
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/observability/log-storage-analysis.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/observability/log-storage-analysis.md
index 498806cf460..14d41592983 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/observability/log-storage-analysis.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/observability/log-storage-analysis.md
@@ -134,7 +134,7 @@ under the License.
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/observability/log.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/observability/log.md
index da5b0189bff..efb467875b1 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/observability/log.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/observability/log.md
@@ -143,7 +143,7 @@ under the License.
- 分桶数量大致为集群磁盘总数的 3 倍,每个桶的数据量压缩后 5GB 左右。
- 使用 Random 策略 (`DISTRIBUTED BY RANDOM BUCKETS 60`),配合写入时的 Single Tablet
导入,可以提升批量(Batch)写入的效率。
-更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/basic-concepts.mdx)。
+更多关于分区分桶的信息,可参考 [数据划分](./table-design/data-partitioning/data-distribution)。
**配置压缩参数**
- 使用 zstd 压缩算法 (`"compression" = "zstd"`), 提高数据压缩率。
diff --git
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/releasenotes/v2.1/release-2.1.0.md
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/releasenotes/v2.1/release-2.1.0.md
index 3ab7ea39944..3ff6484b7f2 100644
---
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/releasenotes/v2.1/release-2.1.0.md
+++
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-3.0/releasenotes/v2.1/release-2.1.0.md
@@ -408,7 +408,7 @@ PROPERTIES (
:::note
-参考文档:[数据划分](../../table-design/data-partitioning/basic-concepts.mdx)
+参考文档:[数据划分](../../table-design/data-partitioning/data-distribution)
:::
### INSERT INTO SELECT 导入性能提升 100%
diff --git a/src/components/recent-blogs/recent-blogs.data.ts
b/src/components/recent-blogs/recent-blogs.data.ts
index 710503a5d9d..cb4abe68bd9 100644
--- a/src/components/recent-blogs/recent-blogs.data.ts
+++ b/src/components/recent-blogs/recent-blogs.data.ts
@@ -1,19 +1,19 @@
export const RECENT_BLOGS_POSTS = [
{
- label: `Apache Doris 2.1.9 Released`,
- link: 'https://doris.apache.org/blog/release-note-3.0.4',
+ label: `Apache Doris 3.0.5 Released`,
+ link: 'https://doris.apache.org/blog/release-note-3.0.5',
},
{
- label: 'Why Apache Doris is a Better Alternative to Elasticsearch for
Real-Time Analytics',
- link:
'https://doris.apache.org/blog/why-apache-doris-is-best-alternatives-for-real-time-analytics',
+ label: 'How Tencent Music saved 80% in costs by migrating from
Elasticsearch to Apache Doris',
+ link:
'https://doris.apache.org/blog/tencent-music-migrate-elasticsearch-to-doris',
},
{
- label: 'Automatic and flexible data sharding: Auto Partition in Apache
Doris',
- link: 'https://doris.apache.org/blog/auto-partition-in-apache-doris',
+ label: 'Slash your cost by 90% with Apache Doris Compute-Storage
Decoupled Mode',
+ link: 'https://doris.apache.org/blog/doris-compute-storage-decoupled',
},
{
- label: 'Migrate data lakehouse from BigQuery to Apache Doris, saving
$4,500 per month',
- link:
'https://doris.apache.org/blog/migrate-lakehouse-from-bigquery-to-doris',
+ label: 'Why Apache Doris is a Better Alternative to Elasticsearch for
Real-Time Analytics',
+ link:
'https://doris.apache.org/blog/why-apache-doris-is-best-alternatives-for-real-time-analytics',
},
diff --git a/src/constant/newsletter.data.ts b/src/constant/newsletter.data.ts
index 17be4bdbfea..4b3da9c6ef8 100644
--- a/src/constant/newsletter.data.ts
+++ b/src/constant/newsletter.data.ts
@@ -1,4 +1,11 @@
export const NEWSLETTER_DATA = [
+ {
+ tags: ['Best Practice'],
+ title: "How Apache Doris supercharges Cisco WebEx’s data platform",
+ content: `Cisco runs five Doris clusters (dozens of nodes) for WebEx,
handling 100,000+ queries per day, and 5TB+ daily real-time data ingestion.`,
+ to: '/blog/doris-supercharges-cisco-webex-data-platform',
+ image:
'blogs/cisco-webex/doris-supercharges-cisco-webex-data-platform.jpg',
+ },
{
tags: ['Release Note'],
title: "Apache Doris 3.0.5 Released",
@@ -18,14 +25,7 @@ export const NEWSLETTER_DATA = [
content: `Apache Doris compute-storage decoupled mode achieves 90%
cost reduction and provides elasticity and workload isolation, while
maintaining high performance in data ingestion and queries.`,
to: '/blog/doris-compute-storage-decoupled',
image: 'compute-storage-decoupled-banner.jpg',
- },
- {
- tags: ['Tech Sharing'],
- title: "Why Apache Doris is a Better Alternative to Elasticsearch for
Real-Time Analytics",
- content: `The comparison in this post will focus on the real-time
analytics capabilities of Apache Doris and Elasticsearch from a user-oriented
perspective`,
- to:
'/blog/why-apache-doris-is-best-alternatives-for-real-time-analytics',
- image: 'es-alternatives/Alternative-to-Elasticsearch.jpg',
- },
+ }
];
\ No newline at end of file
diff --git
a/static/images/blogs/cisco-webex/Less-failures-and-higher-reliability-new-solutions.png
b/static/images/blogs/cisco-webex/Less-failures-and-higher-reliability-new-solutions.png
new file mode 100644
index 00000000000..3aeef4ca522
Binary files /dev/null and
b/static/images/blogs/cisco-webex/Less-failures-and-higher-reliability-new-solutions.png
differ
diff --git
a/static/images/blogs/cisco-webex/doris-supercharges-cisco-webex-data-platform.jpg
b/static/images/blogs/cisco-webex/doris-supercharges-cisco-webex-data-platform.jpg
new file mode 100644
index 00000000000..88534d4b6ed
Binary files /dev/null and
b/static/images/blogs/cisco-webex/doris-supercharges-cisco-webex-data-platform.jpg
differ
diff --git
a/static/images/blogs/cisco-webex/less-failures-and-higher-reliability.png
b/static/images/blogs/cisco-webex/less-failures-and-higher-reliability.png
new file mode 100644
index 00000000000..aa910dcd6fc
Binary files /dev/null and
b/static/images/blogs/cisco-webex/less-failures-and-higher-reliability.png
differ
diff --git
a/static/images/blogs/cisco-webex/platform-1.0-replace-Trino-Kyuubi-Pinot-Iceberg.png
b/static/images/blogs/cisco-webex/platform-1.0-replace-Trino-Kyuubi-Pinot-Iceberg.png
new file mode 100644
index 00000000000..b31616d62a0
Binary files /dev/null and
b/static/images/blogs/cisco-webex/platform-1.0-replace-Trino-Kyuubi-Pinot-Iceberg.png
differ
diff --git a/static/images/blogs/cisco-webex/platform-2.0-Apache-Doris.png
b/static/images/blogs/cisco-webex/platform-2.0-Apache-Doris.png
new file mode 100644
index 00000000000..612e3af1e46
Binary files /dev/null and
b/static/images/blogs/cisco-webex/platform-2.0-Apache-Doris.png differ
diff --git
a/static/images/blogs/cisco-webex/unified-access-control-new-solution.png
b/static/images/blogs/cisco-webex/unified-access-control-new-solution.png
new file mode 100644
index 00000000000..03e95203c97
Binary files /dev/null and
b/static/images/blogs/cisco-webex/unified-access-control-new-solution.png differ
diff --git
a/static/images/blogs/cisco-webex/unified-access-control-old-solution.png
b/static/images/blogs/cisco-webex/unified-access-control-old-solution.png
new file mode 100644
index 00000000000..8a915161dee
Binary files /dev/null and
b/static/images/blogs/cisco-webex/unified-access-control-old-solution.png differ
diff --git
a/static/images/blogs/cisco-webex/unity-drives-efficiency-new-solution.png
b/static/images/blogs/cisco-webex/unity-drives-efficiency-new-solution.png
new file mode 100644
index 00000000000..63710d1a064
Binary files /dev/null and
b/static/images/blogs/cisco-webex/unity-drives-efficiency-new-solution.png
differ
diff --git
a/static/images/blogs/cisco-webex/unity-drives-efficiency-old-solution.png
b/static/images/blogs/cisco-webex/unity-drives-efficiency-old-solution.png
new file mode 100644
index 00000000000..a13905713fb
Binary files /dev/null and
b/static/images/blogs/cisco-webex/unity-drives-efficiency-old-solution.png
differ
diff --git a/static/images/doris-supercharges-cisco-webex-data-platform.jpg
b/static/images/doris-supercharges-cisco-webex-data-platform.jpg
new file mode 100644
index 00000000000..88534d4b6ed
Binary files /dev/null and
b/static/images/doris-supercharges-cisco-webex-data-platform.jpg differ
diff --git a/static/images/observability/studio-discover.jpeg
b/static/images/observability/studio-discover.jpeg
new file mode 100644
index 00000000000..7703e0a3408
Binary files /dev/null and b/static/images/observability/studio-discover.jpeg
differ
diff --git a/versioned_docs/version-2.1/observability/overview.mdx
b/versioned_docs/version-2.1/observability/overview.mdx
index 7d2709d415a..d618983a8df 100644
--- a/versioned_docs/version-2.1/observability/overview.mdx
+++ b/versioned_docs/version-2.1/observability/overview.mdx
@@ -167,4 +167,4 @@ Grafana connects to Doris via MySQL datasource, offering
unified visualization a
While Grafana's log visualization and analysis capabilities are relatively
basic compared to Kibana, third-party vendors have implemented Kibana-like
Discover features. These will soon be integrated into Grafana's Doris
datasource, enhancing unified observability visualization. Future enhancements
will include Elasticsearch protocol compatibility, enabling native Kibana
connections to Doris. For ELK users, replacing Elasticsearch with Doris
maintains existing logging and visualization ha [...]
-
+
diff --git a/versioned_docs/version-2.1/practical-guide/log-storage-analysis.md
b/versioned_docs/version-2.1/practical-guide/log-storage-analysis.md
index 9c1a9ed7de5..dddd69b9f4d 100644
--- a/versioned_docs/version-2.1/practical-guide/log-storage-analysis.md
+++ b/versioned_docs/version-2.1/practical-guide/log-storage-analysis.md
@@ -243,7 +243,7 @@ Due to the distinct characteristics of both writing and
querying log data, it is
- Use the Random strategy (`DISTRIBUTED BY RANDOM BUCKETS 60`) to optimize
batch writing efficiency when paired with single tablet imports.
-For more information, refer to [Data
Partitioning](../table-design/data-partitioning/basic-concepts.mdx).
+For more information, refer to [Data
Partitioning](../table-design/data-partitioning/data-distribution).
**Configure compression parameters**
diff --git a/versioned_docs/version-3.0/observability/overview.mdx
b/versioned_docs/version-3.0/observability/overview.mdx
index 7d2709d415a..d618983a8df 100644
--- a/versioned_docs/version-3.0/observability/overview.mdx
+++ b/versioned_docs/version-3.0/observability/overview.mdx
@@ -167,4 +167,4 @@ Grafana connects to Doris via MySQL datasource, offering
unified visualization a
While Grafana's log visualization and analysis capabilities are relatively
basic compared to Kibana, third-party vendors have implemented Kibana-like
Discover features. These will soon be integrated into Grafana's Doris
datasource, enhancing unified observability visualization. Future enhancements
will include Elasticsearch protocol compatibility, enabling native Kibana
connections to Doris. For ELK users, replacing Elasticsearch with Doris
maintains existing logging and visualization ha [...]
-
+
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]