This is an automated email from the ASF dual-hosted git repository.
ipolyzos pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/fluss.git
The following commit(s) were added to refs/heads/main by this push:
new d3102b90 [blog] Fix text format of "Tiering Service Deep Dive" (#1411)
d3102b90 is described below
commit d3102b90a66e67dd99b1611bf57324e1ec7bfd4a
Author: Liebing <[email protected]>
AuthorDate: Sun Aug 3 01:22:17 2025 +0800
[blog] Fix text format of "Tiering Service Deep Dive" (#1411)
Co-authored-by: Liebing <[email protected]>
---
website/blog/2025-07-01-tiering-service.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/website/blog/2025-07-01-tiering-service.md b/website/blog/2025-07-01-tiering-service.md
index 7dc4f253..2e69da1f 100644
--- a/website/blog/2025-07-01-tiering-service.md
+++ b/website/blog/2025-07-01-tiering-service.md
@@ -133,7 +133,7 @@ The **TieringSourceReader** pulls assigned splits from the enumerator, uses a `T
- **LogScanner** for `TieringLogSplit` (append-only tables)
- **BoundedSplitReader** for `TieringSnapshotSplit` (primary-keyed tables)
- **Data Fetch:** The chosen reader fetches the records defined by the split’s offset or snapshot boundaries from the Fluss server.
-- **Lake Writing"** Retrieved records are handed off to the lake writer, which persists them into the data lake.
+- **Lake Writing:** Retrieved records are handed off to the lake writer, which persists them into the data lake.
By cleanly separating split assignment, reader selection, data fetching, and lake writing, the TieringSourceReader ensures scalable, parallel ingestion of streaming and snapshot data into your lakehouse.
@@ -157,7 +157,7 @@ public interface LakeTieringFactory {
- **createLakeWriter(WriterInitContext)**: builds a `LakeWriter` to convert Fluss rows into the target table format.
- **getWriteResultSerializer()**: supplies a serializer for the writer’s output.
- **createLakeCommitter(CommitterInitContext)**: constructs a `LakeCommitter` to finalize and atomically commit data files.
-- **getCommittableSerializer()**: provides a serializer for committable tokens.```
+- **getCommittableSerializer()**: provides a serializer for committable tokens.
By default, Fluss includes a Paimon-backed tiering factory; Iceberg support is coming soon. Once the `TieringSourceReader` writes a batch of records through the `LakeWriter`, it emits the resulting write metadata downstream to the **TieringCommitOperator**, which then commits those changes both in the lakehouse and back to the Fluss cluster.