This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch ci-rename-to-apache
in repository https://gitbox.apache.org/repos/asf/fluss.git

commit a6767d321a30eed5ef193f48bc1dcd93fe7dbd8a
Author: Jark Wu <[email protected]>
AuthorDate: Mon Aug 25 19:30:10 2025 +0800

    fix broken url links
---
 website/blog/2024-11-29-fluss-open-source.md | 2 +-
 website/blog/2025-06-01-partial-updates.md   | 4 ++--
 website/blog/releases/0.7.md                 | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/website/blog/2024-11-29-fluss-open-source.md b/website/blog/2024-11-29-fluss-open-source.md
index ec8d307a0..3e3effb41 100644
--- a/website/blog/2024-11-29-fluss-open-source.md
+++ b/website/blog/2024-11-29-fluss-open-source.md
@@ -40,7 +40,7 @@ Make sure to keep an eye on the project, give it a try and if you like it, don
 
 ### Getting Started
 - Visit the [GitHub repository](https://github.com/apache/fluss).
-- Check out the [quickstart guide](/docs/quickstart/flink.md).
+- Check out the [quickstart guide](/docs/quickstart/flink/).
 
 ### Additional Resources
 - Announcement Blog Post: [Introducing Fluss: Unified Streaming Storage For Next-Generation Data Analytics](https://www.ververica.com/blog/introducing-fluss)
diff --git a/website/blog/2025-06-01-partial-updates.md b/website/blog/2025-06-01-partial-updates.md
index 08de6d53c..55317235e 100644
--- a/website/blog/2025-06-01-partial-updates.md
+++ b/website/blog/2025-06-01-partial-updates.md
@@ -265,7 +265,7 @@ Flink SQL> SELECT * FROM user_rec_wide;
 
 Now let's switch to `batch` mode and query the current snapshot of the `user_rec_wide` table.
 
-But before that, let's start the [Tiering Service](/docs/maintenance/tiered-storage/lakehouse-storage.md#start-the-datalake-tiering-service) that allows offloading the tables as `Lakehouse` tables.
+But before that, let's start the [Tiering Service](/docs/maintenance/tiered-storage/lakehouse-storage/#start-the-datalake-tiering-service) that allows offloading the tables as `Lakehouse` tables.
 
 **Step 7:** Open a new terminal 💻 in the `Coordinator Server` and run the following command to start the `Tiering Service`:
 ```shell
@@ -297,7 +297,7 @@ Flink SQL> SELECT * FROM user_rec_wide;
 ### Conclusion
 Partial updates in Fluss enable an alternative approach in how we design streaming data pipelines for enriching or joining data.
 
-When all your sources share a primary key - otherwise you can mix & match [streaming lookup joins](/docs/engine-flink/lookups.md#lookup) - you can turn the problem on its head: update a unified table incrementally, rather than joining streams on the fly.
+When all your sources share a primary key - otherwise you can mix & match [streaming lookup joins](/docs/engine-flink/lookups/#lookup) - you can turn the problem on its head: update a unified table incrementally, rather than joining streams on the fly.
 
 The result is a more scalable, maintainable, and efficient pipeline. 
 Engineers can spend less time wrestling with Flink’s state, checkpoints and join mechanics, and more time delivering fresh, integrated data to power real-time analytics and applications.
diff --git a/website/blog/releases/0.7.md b/website/blog/releases/0.7.md
index ce57ed808..a8ebf1bc0 100644
--- a/website/blog/releases/0.7.md
+++ b/website/blog/releases/0.7.md
@@ -155,7 +155,7 @@ DataStreamSource<Order> stream = env.fromSource(
 );
 ```
 
-For usage examples and configuration parameters, see the [DataStream Connector documentation](/docs/engine-flink/datastream.md).
+For usage examples and configuration parameters, see the [DataStream Connector documentation](/docs/engine-flink/datastream/).
 
 
 ## Fluss Java Client
@@ -164,7 +164,7 @@ In this version, we officially release the Fluss Java Client, a client library d
 * **Table API:** For table-based data operations, supporting streaming reads/writes, updates, deletions, and point queries.
 * **Admin API:** For metadata management, including cluster management, table lifecycle, and access control.
 
-The client supports forward and backward compatibility, ensuring smooth upgrades across Fluss versions. With the Fluss Java Client, developers can build online applications and data ingestion services based on Fluss, as well as enterprise-level components such as Fluss management platforms and operations monitoring systems. For detailed usage instructions, please refer to the official documentation: [Fluss Java Client User Guide](/docs/apis/java-client.md).
+The client supports forward and backward compatibility, ensuring smooth upgrades across Fluss versions. With the Fluss Java Client, developers can build online applications and data ingestion services based on Fluss, as well as enterprise-level components such as Fluss management platforms and operations monitoring systems. For detailed usage instructions, please refer to the official documentation: [Fluss Java Client User Guide](/docs/apis/java-client/).
 
 Fluss uses Apache Arrow as its underlying storage format, enabling efficient cross-language extensions. A **Fluss Python Client** is planned for future releases, leveraging the rich ecosystem of **PyArrow** to integrate with popular data analysis tools such as **Pandas** and **DuckDB**.
 This will further lower the barrier for real-time data exploration and analytics.
