This is an automated email from the ASF dual-hosted git repository.

leonardBang pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new 1990310135e [FLINK-39672][docs] Document Java records as supported POJO types
1990310135e is described below

commit 1990310135edfec7021f77e64dbb976100458909
Author: Mingliang Liu <[email protected]>
AuthorDate: Thu May 14 02:02:19 2026 -0700

    [FLINK-39672][docs] Document Java records as supported POJO types
    
    This closes #28149.
    
    Co-authored-by: Claude Opus 4.7 (1M context) <[email protected]>
---
 docs/content.zh/docs/deployment/java_compatibility.md              | 3 +++
 .../datastream/fault-tolerance/serialization/schema_evolution.md   | 2 ++
 .../fault-tolerance/serialization/types_serialization.md           | 7 +++++++
 docs/content/docs/deployment/java_compatibility.md                 | 3 +++
 .../datastream/fault-tolerance/serialization/schema_evolution.md   | 2 ++
 .../fault-tolerance/serialization/types_serialization.md           | 7 +++++++
 6 files changed, 24 insertions(+)

diff --git a/docs/content.zh/docs/deployment/java_compatibility.md b/docs/content.zh/docs/deployment/java_compatibility.md
index 7d12484f48e..a8054cea3f8 100644
--- a/docs/content.zh/docs/deployment/java_compatibility.md
+++ b/docs/content.zh/docs/deployment/java_compatibility.md
@@ -46,6 +46,9 @@ The following Flink features have not been tested with Java 11:
 We use Java 17 by default in Flink 2.0.0 and is the recommended Java version to run Flink on.
 This is the default version for docker images.
 
+Support for Java Records was added in Flink 1.19 ([FLINK-32380](https://issues.apache.org/jira/browse/FLINK-32380)).
+Java records are handled as [POJO types]({{< ref "docs/dev/datastream/fault-tolerance/serialization/types_serialization" >}}#pojos) and serialized via their canonical constructor.
+
 ### Untested Flink features
 
 These Flink features have not been tested with Java 17:
diff --git a/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md b/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md
index f794d72e01f..8df6ddeb316 100644
--- a/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md
+++ b/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md
@@ -85,6 +85,8 @@ Flink 基于下面的规则来支持 [POJO 类型]({{< ref "docs/dev/datastream/
  3. 不可以修改字段的声明类型。
  4. 不可以改变 POJO 类型的类名,包括类的命名空间。
 
+上述规则同样适用于 [Java records]({{< ref "docs/dev/datastream/fault-tolerance/serialization/types_serialization" >}}#pojos),自 Flink 1.19 起 records 被作为 POJO 类型处理。
+
 需要注意,只有从 1.8.0 及以上版本的 Flink 生产的 savepoint 进行恢复时,POJO 类型的状态才可以进行升级。
 对 1.8.0 版本之前的 Flink 是没有办法进行 POJO 类型升级的。
 
diff --git a/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md b/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
index 23b1e8ec24c..22983eaee01 100644
--- a/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
+++ b/docs/content.zh/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
@@ -85,6 +85,9 @@ Java classes are treated by Flink as a special POJO data type if they fulfill th
 
 - The type of a field must be supported by a registered serializer.
 
+[Java records](https://docs.oracle.com/en/java/javase/17/language/records.html) are also recognized as POJO types since Flink 1.19 ([FLINK-32380](https://issues.apache.org/jira/browse/FLINK-32380)).
+A `public` record class is serialized by `PojoSerializer` through its canonical constructor; the no-argument constructor and getter/setter requirements above do not apply.
+
 POJOs are generally represented with a `PojoTypeInfo` and serialized with the `PojoSerializer` (using [Kryo](https://github.com/EsotericSoftware/kryo) as configurable fallback).
 The exception is when the POJOs are actually Avro types (Avro Specific Records) or produced as "Avro Reflect Types".
 In that case the POJO's are represented by an `AvroTypeInfo` and serialized with the `AvroSerializer`.
@@ -309,6 +312,10 @@ conditions are fulfilled:
   or have a public getter- and a setter- method that follows the Java beans
   naming conventions for getters and setters.
 
+Java records are also recognized as POJO types.
+The class must still be `public`, but the no-argument constructor and getter/setter rules above are waived.
+Java records are instantiated via their canonical constructor.
+
 Note that when a user-defined data type can't be recognized as a POJO type, it must be processed as GenericType and
 serialized with Kryo.
 
diff --git a/docs/content/docs/deployment/java_compatibility.md b/docs/content/docs/deployment/java_compatibility.md
index 7d12484f48e..a8054cea3f8 100644
--- a/docs/content/docs/deployment/java_compatibility.md
+++ b/docs/content/docs/deployment/java_compatibility.md
@@ -46,6 +46,9 @@ The following Flink features have not been tested with Java 11:
 We use Java 17 by default in Flink 2.0.0 and is the recommended Java version to run Flink on.
 This is the default version for docker images.
 
+Support for Java Records was added in Flink 1.19 ([FLINK-32380](https://issues.apache.org/jira/browse/FLINK-32380)).
+Java records are handled as [POJO types]({{< ref "docs/dev/datastream/fault-tolerance/serialization/types_serialization" >}}#pojos) and serialized via their canonical constructor.
+
 ### Untested Flink features
 
 These Flink features have not been tested with Java 17:
diff --git a/docs/content/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md b/docs/content/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md
index 128633b5a4d..d5ea3c798ff 100644
--- a/docs/content/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md
+++ b/docs/content/docs/dev/datastream/fault-tolerance/serialization/schema_evolution.md
@@ -95,6 +95,8 @@ based on the following set of rules:
  3. Declared fields types cannot change.
 4. Class name of the POJO type cannot change, including the namespace of the class.
 
+The same rules apply to [Java records]({{< ref "docs/dev/datastream/fault-tolerance/serialization/types_serialization" >}}#pojos), which Flink treats as POJO types since Flink 1.19.
+
 Note that the schema of POJO type state can only be evolved when restoring from a previous savepoint with Flink versions
 newer than 1.8.0. When restoring with Flink versions older than 1.8.0, the schema cannot be changed.
 
diff --git a/docs/content/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md b/docs/content/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
index 532e07ab3c2..d72f995a16d 100644
--- a/docs/content/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
+++ b/docs/content/docs/dev/datastream/fault-tolerance/serialization/types_serialization.md
@@ -86,6 +86,9 @@ Java classes are treated by Flink as a special POJO data type if they fulfill th
 
 - The type of a field must be supported by a registered serializer.
 
+[Java records](https://docs.oracle.com/en/java/javase/17/language/records.html) are also recognized as POJO types since Flink 1.19 ([FLINK-32380](https://issues.apache.org/jira/browse/FLINK-32380)).
+A `public` record class is serialized by `PojoSerializer` through its canonical constructor; the no-argument constructor and getter/setter requirements above do not apply.
+
 POJOs are generally represented with a `PojoTypeInfo` and serialized with the `PojoSerializer` (using [Kryo](https://github.com/EsotericSoftware/kryo) as configurable fallback).
 The exception is when the POJOs are actually Avro types (Avro Specific Records) or produced as "Avro Reflect Types".
 In that case the POJO's are represented by an `AvroTypeInfo` and serialized with the `AvroSerializer`.
@@ -310,6 +313,10 @@ conditions are fulfilled:
   or have a public getter- and a setter- method that follows the Java beans
   naming conventions for getters and setters.
 
+Java records are also recognized as POJO types.
+The class must still be `public`, but the no-argument constructor and getter/setter rules above are waived.
+Java records are instantiated via their canonical constructor.
+
 Note that when a user-defined data type can't be recognized as a POJO type, it must be processed as GenericType and
 serialized with Kryo.
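
For illustration, here is a minimal sketch (not part of the commit) of what the documented behavior means in practice: since Flink 1.19, a `public` Java record qualifies as a POJO type and is rebuilt through its canonical constructor, so no no-argument constructor or getters/setters are needed. The class and field names below are hypothetical, not taken from the Flink docs.

```java
// Hypothetical example: a public record of the kind Flink >= 1.19 would
// accept as a POJO type. PojoSerializer reconstructs such records via the
// canonical constructor instead of a no-arg constructor plus setters.
public class RecordPojoSketch {

    // Record components become the serialized fields; the compiler
    // generates the canonical constructor (String, long) automatically.
    public record WordCount(String word, long count) {}

    public static void main(String[] args) {
        // The canonical constructor is the only way to build the record,
        // which is exactly the path the serializer relies on.
        WordCount wc = new WordCount("flink", 3L);
        System.out.println(wc.word() + "=" + wc.count());
    }
}
```

Note that the record must be `public` for POJO recognition, matching the rule stated in the diff above; nested records additionally need to be reachable as static members.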
 
