This is an automated email from the ASF dual-hosted git repository.

xtsong pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 416e258d53b0354666d464a851cc6f72c18424a7
Author: Xintong Song <[email protected]>
AuthorDate: Thu Dec 25 15:26:32 2025 +0800

    Fix broken image links in the dynamic iceberg sink blogpost
    
    This closes #820
---
 docs/content/posts/2025-10-14-kafka-dynamic-iceberg-sink.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/content/posts/2025-10-14-kafka-dynamic-iceberg-sink.md b/docs/content/posts/2025-10-14-kafka-dynamic-iceberg-sink.md
index 2c9db40dc..6faf8e162 100644
--- a/docs/content/posts/2025-10-14-kafka-dynamic-iceberg-sink.md
+++ b/docs/content/posts/2025-10-14-kafka-dynamic-iceberg-sink.md
@@ -21,7 +21,7 @@ In this post, we'll guide you through building this exact system. We will start
 Let's start with the basics. Our goal is to get data from a single Kafka topic into a corresponding Iceberg table.
 
 <div style="text-align: center;">
-<img src="/img/blog/2025-10-03-kafka-dynamic-iceberg-sink/simple-single-kafka-topic-to-iceberg.png" style="width:70%;margin:15px">
+<img src="/img/blog/2025-10-14-kafka-dynamic-iceberg-sink/simple-single-kafka-topic-to-iceberg.png" style="width:70%;margin:15px">
 </div>
 
 ### How to write to an Iceberg table with Flink
@@ -46,7 +46,7 @@ This setup is simple, robust, and works perfectly for a single topic with a stab
 Now, what if we have thousands of topics? The logical next step is to create a dedicated processing graph (or DAG) for each topic-to-table mapping within a single Flink application.
 
 <div style="text-align: center;">
-<img src="/img/blog/2025-10-03-kafka-dynamic-iceberg-sink/multiple-dag-pipeline.png" style="width:70%;margin:15px">
+<img src="/img/blog/2025-10-14-kafka-dynamic-iceberg-sink/multiple-dag-pipeline.png" style="width:70%;margin:15px">
 </div>
 
 This looks good, but this static architecture cannot adapt to the changes: an Iceberg sink can only write to **one predefined table**, the table must **exist beforehand**, and its **schema is fixed** for the lifetime of the job.
@@ -75,7 +75,7 @@ All these scenarios require complex workarounds and a way to **automatically res
 Here’s the new architecture:
 
 <div style="text-align: center;">
-<img src="/img/blog/2025-10-03-kafka-dynamic-iceberg-sink/dynamic-iceberg-sink.png" style="width:70%;margin:15px">
+<img src="/img/blog/2025-10-14-kafka-dynamic-iceberg-sink/dynamic-iceberg-sink.png" style="width:70%;margin:15px">
 </div>
 
 This single, unified pipeline can ingest from any number of topics and write to any number of tables, automatically handling new topics and schema changes without restarts.
