This is an automated email from the ASF dual-hosted git repository.
yuxia pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/fluss.git
The following commit(s) were added to refs/heads/main by this push:
new 690cab8bb [lake/paimon] Bump Paimon version to 1.3.1 (#2035)
690cab8bb is described below
commit 690cab8bb9e5bbd3fb1faf9ed4242283adf005f4
Author: Pei Yu <[email protected]>
AuthorDate: Thu Nov 27 11:41:00 2025 +0800
[lake/paimon] Bump Paimon version to 1.3.1 (#2035)
---
fluss-lake/fluss-lake-paimon/pom.xml | 2 +-
.../org/apache/fluss/lake/paimon/tiering/PaimonLakeCommitter.java | 6 +++++-
fluss-lake/fluss-lake-paimon/src/main/resources/META-INF/NOTICE | 2 +-
pom.xml | 2 +-
website/docs/maintenance/tiered-storage/lakehouse-storage.md | 4 ++--
website/docs/quickstart/lakehouse.md | 4 ++--
website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md | 6 +++---
7 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/fluss-lake/fluss-lake-paimon/pom.xml b/fluss-lake/fluss-lake-paimon/pom.xml
index 9863208dd..7d66e8a9d 100644
--- a/fluss-lake/fluss-lake-paimon/pom.xml
+++ b/fluss-lake/fluss-lake-paimon/pom.xml
@@ -33,7 +33,7 @@
<packaging>jar</packaging>
<properties>
- <paimon.version>1.2.0</paimon.version>
+ <paimon.version>1.3.1</paimon.version>
</properties>
<dependencies>
diff --git a/fluss-lake/fluss-lake-paimon/src/main/java/org/apache/fluss/lake/paimon/tiering/PaimonLakeCommitter.java b/fluss-lake/fluss-lake-paimon/src/main/java/org/apache/fluss/lake/paimon/tiering/PaimonLakeCommitter.java
index 6408e4a81..8a7c2716b 100644
--- a/fluss-lake/fluss-lake-paimon/src/main/java/org/apache/fluss/lake/paimon/tiering/PaimonLakeCommitter.java
+++ b/fluss-lake/fluss-lake-paimon/src/main/java/org/apache/fluss/lake/paimon/tiering/PaimonLakeCommitter.java
@@ -31,6 +31,7 @@ import org.apache.paimon.catalog.Catalog;
import org.apache.paimon.manifest.IndexManifestEntry;
import org.apache.paimon.manifest.ManifestCommittable;
import org.apache.paimon.manifest.ManifestEntry;
+import org.apache.paimon.manifest.SimpleFileEntry;
import org.apache.paimon.operation.FileStoreCommit;
import org.apache.paimon.table.FileStoreTable;
import org.apache.paimon.table.sink.CommitCallback;
@@ -224,7 +225,10 @@ public class PaimonLakeCommitter implements LakeCommitter<PaimonWriteResult, Pai
@Override
public void call(
- List<ManifestEntry> list, List<IndexManifestEntry> indexFiles, Snapshot snapshot) {
+ List<SimpleFileEntry> baseFiles,
+ List<ManifestEntry> deltaFiles,
+ List<IndexManifestEntry> indexFiles,
+ Snapshot snapshot) {
currentCommitSnapshotId.set(snapshot.id());
}
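For context on the hunk above: in Paimon 1.3.x, `CommitCallback.call` takes four arguments (base files, delta files, index files, snapshot) instead of three, so the anonymous callback in `PaimonLakeCommitter` gains a `baseFiles` parameter while still only recording the snapshot id. A rough sketch of that shape, using hypothetical stand-in types rather than the real Paimon classes:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Schematic illustration only: these are stand-in types, not Paimon's
// SimpleFileEntry / ManifestEntry / IndexManifestEntry / CommitCallback.
public class CallbackSketch {
    record Snapshot(long id) {}

    interface CommitCallback {
        // New 1.3.x-style shape: base files, delta files, index files, snapshot.
        void call(List<String> baseFiles, List<String> deltaFiles,
                  List<String> indexFiles, Snapshot snapshot);
    }

    public static void main(String[] args) {
        AtomicLong currentCommitSnapshotId = new AtomicLong(-1);
        // As in PaimonLakeCommitter, the callback ignores the file lists
        // and only captures the id of the committed snapshot.
        CommitCallback cb = (base, delta, index, snapshot) ->
                currentCommitSnapshotId.set(snapshot.id());
        cb.call(List.of(), List.of("delta-manifest-0"), List.of(), new Snapshot(42L));
        System.out.println(currentCommitSnapshotId.get()); // prints 42
    }
}
```

Since the extra parameter is simply unused here, the fix is additive and keeps the committer's behavior unchanged across the 1.2.0 to 1.3.1 bump.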
diff --git a/fluss-lake/fluss-lake-paimon/src/main/resources/META-INF/NOTICE b/fluss-lake/fluss-lake-paimon/src/main/resources/META-INF/NOTICE
index c2d4c8a6d..55fd0038b 100644
--- a/fluss-lake/fluss-lake-paimon/src/main/resources/META-INF/NOTICE
+++ b/fluss-lake/fluss-lake-paimon/src/main/resources/META-INF/NOTICE
@@ -6,4 +6,4 @@ The Apache Software Foundation (http://www.apache.org/).
This project bundles the following dependencies under the Apache Software License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)
-- org.apache.paimon:paimon-bundle:1.2.0
+- org.apache.paimon:paimon-bundle:1.3.1
diff --git a/pom.xml b/pom.xml
index 8c7cd8c4a..dc40c4ace 100644
--- a/pom.xml
+++ b/pom.xml
@@ -88,7 +88,7 @@
<curator.version>5.4.0</curator.version>
<netty.version>4.1.104.Final</netty.version>
<arrow.version>15.0.0</arrow.version>
- <paimon.version>1.2.0</paimon.version>
+ <paimon.version>1.3.1</paimon.version>
<iceberg.version>1.9.1</iceberg.version>
<fluss.hadoop.version>2.10.2</fluss.hadoop.version>
diff --git a/website/docs/maintenance/tiered-storage/lakehouse-storage.md b/website/docs/maintenance/tiered-storage/lakehouse-storage.md
index bcce00cde..bd7b646d0 100644
--- a/website/docs/maintenance/tiered-storage/lakehouse-storage.md
+++ b/website/docs/maintenance/tiered-storage/lakehouse-storage.md
@@ -35,7 +35,7 @@ datalake.paimon.metastore: filesystem
datalake.paimon.warehouse: /tmp/paimon
```
-Fluss processes Paimon configurations by removing the `datalake.paimon.` prefix and then use the remaining configuration (without the prefix `datalake.paimon.`) to create the Paimon catalog. Checkout the [Paimon documentation](https://paimon.apache.org/docs/1.1/maintenance/configurations/) for more details on the available configurations.
+Fluss processes Paimon configurations by removing the `datalake.paimon.` prefix and then using the remaining configuration to create the Paimon catalog. Check out the [Paimon documentation](https://paimon.apache.org/docs/1.3/maintenance/configurations/) for more details on the available configurations.
For example, if you want to configure to use Hive catalog, you can configure like following:
```yaml
@@ -65,7 +65,7 @@ Then, you must start the datalake tiering service to tier Fluss's data to the la
you should download the corresponding [Fluss filesystem jar](/downloads#filesystem-jars) and also put it into `${FLINK_HOME}/lib`
- Put [fluss-lake-paimon jar](https://repo1.maven.org/maven2/org/apache/fluss/fluss-lake-paimon/$FLUSS_VERSION$/fluss-lake-paimon-$FLUSS_VERSION$.jar) into `${FLINK_HOME}/lib`
- [Download](https://flink.apache.org/downloads/) pre-bundled Hadoop jar `flink-shaded-hadoop-2-uber-*.jar` and put into `${FLINK_HOME}/lib`
-- Put Paimon's [filesystem jar](https://paimon.apache.org/docs/1.1/project/download/) into `${FLINK_HOME}/lib`, if you use s3 to store paimon data, please put `paimon-s3` jar into `${FLINK_HOME}/lib`
+- Put Paimon's [filesystem jar](https://paimon.apache.org/docs/1.3/project/download/) into `${FLINK_HOME}/lib`. If you use S3 to store Paimon data, also put the `paimon-s3` jar into `${FLINK_HOME}/lib`
- The other jars that Paimon may require, for example, if you use HiveCatalog, you will need to put hive related jars
diff --git a/website/docs/quickstart/lakehouse.md b/website/docs/quickstart/lakehouse.md
index 622792190..a9f42a774 100644
--- a/website/docs/quickstart/lakehouse.md
+++ b/website/docs/quickstart/lakehouse.md
@@ -117,7 +117,7 @@ The Docker Compose environment consists of the following containers:
- **Flink Cluster**: a Flink `JobManager` and a Flink `TaskManager` container to execute queries.
**Note:** The `apache/fluss-quickstart-flink` image is based on [flink:1.20.3-java17](https://hub.docker.com/layers/library/flink/1.20-java17/images/sha256:296c7c23fa40a9a3547771b08fc65e25f06bc4cfd3549eee243c99890778cafc) and
-includes the [fluss-flink](engine-flink/getting-started.md), [paimon-flink](https://paimon.apache.org/docs/1.0/flink/quick-start/) and
+includes the [fluss-flink](engine-flink/getting-started.md), [paimon-flink](https://paimon.apache.org/docs/1.3/flink/quick-start/) and
[flink-connector-faker](https://flink-packages.org/packages/flink-faker) to simplify this guide.
3. To start all containers, run:
@@ -136,7 +136,7 @@ You can also visit http://localhost:8083/ to see if Flink is running normally.
:::note
- If you want to additionally use an observability stack, follow one of the provided quickstart guides [here](maintenance/observability/quickstart.md) and then continue with this guide.
-- If you want to run with your own Flink environment, remember to download the [fluss-flink connector jar](/downloads), [flink-connector-faker](https://github.com/knaufk/flink-faker/releases), [paimon-flink connector jar](https://paimon.apache.org/docs/1.0/flink/quick-start/) and then put them to `FLINK_HOME/lib/`.
+- If you want to run with your own Flink environment, remember to download the [fluss-flink connector jar](/downloads), [flink-connector-faker](https://github.com/knaufk/flink-faker/releases), and the [paimon-flink connector jar](https://paimon.apache.org/docs/1.3/flink/quick-start/), then put them into `FLINK_HOME/lib/`.
- All the following commands involving `docker compose` should be executed in the created working directory that contains the `docker-compose.yml` file.
:::
diff --git a/website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md b/website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md
index 6e1462435..cba9def87 100644
--- a/website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md
+++ b/website/docs/streaming-lakehouse/integrate-data-lakes/paimon.md
@@ -73,7 +73,7 @@ You can choose between two views of the table:
#### Read Data Only in Paimon
##### Prerequisites
-Download the [paimon-flink.jar](https://paimon.apache.org/docs/1.2/) that matches your Flink version, and place it in the `FLINK_HOME/lib` directory
+Download the [paimon-flink.jar](https://paimon.apache.org/docs/1.3/) that matches your Flink version, and place it in the `FLINK_HOME/lib` directory.
##### Read Paimon Data
To read only data stored in Paimon, use the `$lake` suffix in the table name. The following example demonstrates this:
@@ -92,7 +92,7 @@ SELECT * FROM orders$lake$snapshots;
When you specify the `$lake` suffix in a query, the table behaves like a standard Paimon table and inherits all its capabilities.
This allows you to take full advantage of Flink's query support and optimizations on Paimon, such as querying system tables, time travel, and more.
-For further information, refer to Paimon’s [SQL Query documentation](https://paimon.apache.org/docs/0.9/flink/sql-query/#sql-query).
+For further information, refer to Paimon’s [SQL Query documentation](https://paimon.apache.org/docs/1.3/flink/sql-query/#sql-query).
#### Union Read of Data in Fluss and Paimon
@@ -125,7 +125,7 @@ Key behavior for data retention:
### Reading with other Engines
-Since the data tiered to Paimon from Fluss is stored as a standard Paimon table, you can use any engine that supports Paimon to read it. Below is an example using [StarRocks](https://paimon.apache.org/docs/1.2/ecosystem/starrocks/):
+Since the data tiered to Paimon from Fluss is stored as a standard Paimon table, you can use any engine that supports Paimon to read it. Below is an example using [StarRocks](https://paimon.apache.org/docs/1.3/ecosystem/starrocks/):
First, create a Paimon catalog in StarRocks: