This is an automated email from the ASF dual-hosted git repository.

lzljs3620320 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/paimon.git


The following commit(s) were added to refs/heads/master by this push:
     new 7a39013561 [doc] Add changelog merging into changelog-producer
7a39013561 is described below

commit 7a390135617a86906f172a391a182a9b3b3dae04
Author: Jingsong <[email protected]>
AuthorDate: Wed Nov 27 10:24:40 2024 +0800

    [doc] Add changelog merging into changelog-producer
---
 docs/content/maintenance/write-performance.md        |  9 ---------
 docs/content/primary-key-table/changelog-producer.md | 11 +++++++++++
 2 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/docs/content/maintenance/write-performance.md b/docs/content/maintenance/write-performance.md
index 02362b9096..ade2c3353e 100644
--- a/docs/content/maintenance/write-performance.md
+++ b/docs/content/maintenance/write-performance.md
@@ -160,12 +160,3 @@ You can use fine-grained-resource-management of Flink to increase committer heap
 1. Configure Flink Configuration `cluster.fine-grained-resource-management.enabled: true`. (This is default after Flink 1.18)
 2. Configure Paimon Table Options: `sink.committer-memory`, for example 300 MB, depends on your `TaskManager`.
    (`sink.committer-cpu` is also supported)
-
-## Changelog Compaction
-
-If Flink's checkpoint interval is short (for example, 30 seconds) and the number of buckets is large,
-each snapshot may produce lots of small changelog files.
-Too many files may put a burden on the distributed storage cluster.
-
-In order to compact small changelog files into large ones, you can set the table option `changelog.precommit-compact = true`.
-Default value of this option is false, if true, it will add a compact coordinator and worker operator after the writer operator, which copies changelog files into large ones.
diff --git a/docs/content/primary-key-table/changelog-producer.md b/docs/content/primary-key-table/changelog-producer.md
index 011f7b6f27..a9364ee9f0 100644
--- a/docs/content/primary-key-table/changelog-producer.md
+++ b/docs/content/primary-key-table/changelog-producer.md
@@ -130,3 +130,14 @@ efficient as the input changelog producer and the latency to produce changelog m
 
 Full-compaction changelog-producer supports `changelog-producer.row-deduplicate` to avoid generating -U, +U
 changelog for the same record.
+
+## Changelog Merging
+
+This applies to the `input`, `lookup`, and `full-compaction` changelog producers.
+
+If Flink's checkpoint interval is short (for example, 30 seconds) and the number of buckets is large, each snapshot may
+produce lots of small changelog files. Too many files may put a burden on the distributed storage cluster.
+
+In order to compact small changelog files into large ones, you can set the table option `changelog.precommit-compact = true`.
+The default value of this option is false. If set to true, a compact coordinator and a worker operator are added after the writer
+operator, which copy small changelog files into larger ones.
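
A similar sketch for the option described in the new section, again assuming a hypothetical table name `my_table` in Flink SQL:

```sql
-- Hypothetical table name. Enables merging of the small changelog files
-- produced at each checkpoint into larger ones before they are committed.
ALTER TABLE my_table SET (
    'changelog.precommit-compact' = 'true'
);
```

The option could also be set in the table's `WITH` clause at creation time; `ALTER TABLE` is shown here because it applies to existing tables.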
